00:00:00.001 Started by upstream project "autotest-per-patch" build number 132791 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.148 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.149 The recommended git tool is: git 00:00:00.149 using credential 00000000-0000-0000-0000-000000000002 00:00:00.150 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.183 Fetching changes from the remote Git repository 00:00:00.184 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.240 Using shallow fetch with depth 1 00:00:00.240 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.240 > git --version # timeout=10 00:00:00.283 > git --version # 'git version 2.39.2' 00:00:00.283 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.657 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.666 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.676 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.676 > git config core.sparsecheckout # timeout=10 00:00:07.686 > git read-tree -mu HEAD # timeout=10 00:00:07.700 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.719 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.719 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.829 [Pipeline] Start of Pipeline 00:00:07.869 [Pipeline] library 00:00:07.871 Loading library shm_lib@master 00:00:07.871 Library shm_lib@master is cached. Copying from home. 00:00:07.910 [Pipeline] node 00:00:07.920 Running on WFP6 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.922 [Pipeline] { 00:00:07.931 [Pipeline] catchError 00:00:07.932 [Pipeline] { 00:00:07.944 [Pipeline] wrap 00:00:07.952 [Pipeline] { 00:00:07.961 [Pipeline] stage 00:00:07.962 [Pipeline] { (Prologue) 00:00:08.253 [Pipeline] sh 00:00:08.544 + logger -p user.info -t JENKINS-CI 00:00:08.563 [Pipeline] echo 00:00:08.565 Node: WFP6 00:00:08.571 [Pipeline] sh 00:00:08.954 [Pipeline] setCustomBuildProperty 00:00:08.964 [Pipeline] echo 00:00:08.965 Cleanup processes 00:00:08.969 [Pipeline] sh 00:00:09.261 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.261 2963916 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.274 [Pipeline] sh 00:00:09.562 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.562 ++ grep -v 'sudo pgrep' 00:00:09.562 ++ awk '{print $1}' 00:00:09.562 + sudo kill -9 00:00:09.562 + true 00:00:09.574 [Pipeline] cleanWs 00:00:09.583 [WS-CLEANUP] Deleting project workspace... 00:00:09.583 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.590 [WS-CLEANUP] done 00:00:09.593 [Pipeline] setCustomBuildProperty 00:00:09.603 [Pipeline] sh 00:00:09.885 + sudo git config --global --replace-all safe.directory '*' 00:00:10.009 [Pipeline] httpRequest 00:00:11.577 [Pipeline] echo 00:00:11.579 Sorcerer 10.211.164.112 is alive 00:00:11.588 [Pipeline] retry 00:00:11.590 [Pipeline] { 00:00:11.604 [Pipeline] httpRequest 00:00:11.608 HttpMethod: GET 00:00:11.608 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.609 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.621 Response Code: HTTP/1.1 200 OK 00:00:11.621 Success: Status code 200 is in the accepted range: 200,404 00:00:11.622 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.920 [Pipeline] } 00:00:12.936 [Pipeline] // retry 00:00:12.942 [Pipeline] sh 00:00:13.228 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.244 [Pipeline] httpRequest 00:00:13.806 [Pipeline] echo 00:00:13.808 Sorcerer 10.211.164.112 is alive 00:00:13.818 [Pipeline] retry 00:00:13.819 [Pipeline] { 00:00:13.833 [Pipeline] httpRequest 00:00:13.838 HttpMethod: GET 00:00:13.838 URL: http://10.211.164.112/packages/spdk_3fe025922a49eaef317768257b5300ea7455cdab.tar.gz 00:00:13.839 Sending request to url: http://10.211.164.112/packages/spdk_3fe025922a49eaef317768257b5300ea7455cdab.tar.gz 00:00:13.858 Response Code: HTTP/1.1 200 OK 00:00:13.858 Success: Status code 200 is in the accepted range: 200,404 00:00:13.858 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_3fe025922a49eaef317768257b5300ea7455cdab.tar.gz 00:01:11.128 [Pipeline] } 00:01:11.147 [Pipeline] // retry 00:01:11.154 [Pipeline] sh 00:01:11.440 + tar --no-same-owner -xf spdk_3fe025922a49eaef317768257b5300ea7455cdab.tar.gz 00:01:13.990 [Pipeline] sh 00:01:14.277 + git -C spdk log --oneline -n5 00:01:14.277 3fe025922 env: handle possible DPDK errors in mem_map_init 00:01:14.277 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:01:14.277 496bfd677 env: match legacy mem mode config with DPDK 00:01:14.277 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:01:14.277 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:01:14.288 [Pipeline] } 00:01:14.301 [Pipeline] // stage 00:01:14.310 [Pipeline] stage 00:01:14.312 [Pipeline] { (Prepare) 00:01:14.328 [Pipeline] writeFile 00:01:14.343 [Pipeline] sh 00:01:14.630 + logger -p user.info -t JENKINS-CI 00:01:14.642 [Pipeline] sh 00:01:14.930 + logger -p user.info -t JENKINS-CI 00:01:14.943 [Pipeline] sh 00:01:15.231 + cat autorun-spdk.conf 00:01:15.231 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.231 SPDK_TEST_NVMF=1 00:01:15.231 SPDK_TEST_NVME_CLI=1 00:01:15.231 SPDK_TEST_NVMF_NICS=mlx5 00:01:15.231 SPDK_RUN_UBSAN=1 00:01:15.231 NET_TYPE=phy 00:01:15.238 RUN_NIGHTLY=0 00:01:15.243 [Pipeline] readFile 00:01:15.272 [Pipeline] withEnv 00:01:15.274 [Pipeline] { 00:01:15.287 [Pipeline] sh 00:01:15.575 + set -ex 00:01:15.575 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:15.575 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:15.575 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.575 ++ SPDK_TEST_NVMF=1 00:01:15.575 ++ SPDK_TEST_NVME_CLI=1 00:01:15.575 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:15.575 ++ SPDK_RUN_UBSAN=1 00:01:15.575 ++ NET_TYPE=phy 00:01:15.575 ++ RUN_NIGHTLY=0 
00:01:15.575 + case $SPDK_TEST_NVMF_NICS in 00:01:15.575 + DRIVERS=mlx5_ib 00:01:15.575 + [[ -n mlx5_ib ]] 00:01:15.575 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:15.575 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:15.575 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:15.575 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.575 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.575 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.575 + true 00:01:15.575 + for D in $DRIVERS 00:01:15.575 + sudo modprobe mlx5_ib 00:01:15.834 + exit 0 00:01:15.844 [Pipeline] } 00:01:15.859 [Pipeline] // withEnv 00:01:15.865 [Pipeline] } 00:01:15.879 [Pipeline] // stage 00:01:15.889 [Pipeline] catchError 00:01:15.891 [Pipeline] { 00:01:15.906 [Pipeline] timeout 00:01:15.906 Timeout set to expire in 1 hr 0 min 00:01:15.908 [Pipeline] { 00:01:15.923 [Pipeline] stage 00:01:15.925 [Pipeline] { (Tests) 00:01:15.939 [Pipeline] sh 00:01:16.226 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:16.226 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:16.226 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:16.226 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:16.226 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:16.226 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:16.227 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:16.227 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:16.227 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:16.227 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:16.227 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:16.227 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:16.227 + source /etc/os-release 00:01:16.227 ++ NAME='Fedora Linux' 00:01:16.227 ++ VERSION='39 (Cloud Edition)' 00:01:16.227 ++ ID=fedora 00:01:16.227 ++ VERSION_ID=39 00:01:16.227 ++ VERSION_CODENAME= 00:01:16.227 ++ PLATFORM_ID=platform:f39 00:01:16.227 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:16.227 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.227 ++ LOGO=fedora-logo-icon 00:01:16.227 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:16.227 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.227 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:16.227 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.227 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.227 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.227 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:16.227 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.227 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:16.227 ++ SUPPORT_END=2024-11-12 00:01:16.227 ++ VARIANT='Cloud Edition' 00:01:16.227 ++ VARIANT_ID=cloud 00:01:16.227 + uname -a 00:01:16.227 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:16.227 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:18.768 Hugepages 00:01:18.768 node hugesize free / total 00:01:18.768 node0 1048576kB 0 / 0 00:01:18.768 node0 2048kB 0 / 0 00:01:18.768 node1 1048576kB 0 / 0 00:01:18.768 node1 2048kB 0 / 0 00:01:18.768 00:01:18.768 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.768 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.2 8086 2021 0 ioatdma 
- - 00:01:18.768 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:18.768 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:18.768 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:18.768 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:18.768 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:18.768 + rm -f /tmp/spdk-ld-path 00:01:18.768 + source autorun-spdk.conf 00:01:18.768 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.768 ++ SPDK_TEST_NVMF=1 00:01:18.768 ++ SPDK_TEST_NVME_CLI=1 00:01:18.768 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:18.768 ++ SPDK_RUN_UBSAN=1 00:01:18.768 ++ NET_TYPE=phy 00:01:18.768 ++ RUN_NIGHTLY=0 00:01:18.768 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.768 + [[ -n '' ]] 00:01:18.769 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:18.769 + for M in /var/spdk/build-*-manifest.txt 00:01:18.769 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.769 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:18.769 + for M in /var/spdk/build-*-manifest.txt 00:01:18.769 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.769 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:18.769 + for M in /var/spdk/build-*-manifest.txt 00:01:18.769 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.769 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:18.769 ++ uname 00:01:18.769 + [[ Linux == \L\i\n\u\x ]] 00:01:18.769 + sudo dmesg -T 00:01:19.028 + sudo dmesg --clear 00:01:19.028 + dmesg_pid=2964855 00:01:19.028 + [[ Fedora Linux == FreeBSD ]] 00:01:19.028 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.028 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.028 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.028 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.028 + sudo dmesg -Tw 00:01:19.028 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.028 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.028 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.028 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:19.028 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.028 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.028 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.028 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.028 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.028 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.028 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.028 11:40:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.028 11:40:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.028 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:19.029 11:40:26 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:19.029 11:40:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:19.029 11:40:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.029 11:40:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.029 11:40:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:19.029 11:40:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:19.029 11:40:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.029 11:40:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.029 11:40:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.029 11:40:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.029 11:40:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.029 11:40:27 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.029 11:40:27 -- paths/export.sh@5 -- $ export PATH 00:01:19.029 11:40:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.029 11:40:27 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:19.029 11:40:27 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:19.029 11:40:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733740827.XXXXXX 00:01:19.029 11:40:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733740827.EySXSp 00:01:19.029 11:40:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:19.029 11:40:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:19.029 11:40:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:19.029 11:40:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.029 11:40:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.029 11:40:27 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:19.029 11:40:27 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:19.029 11:40:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.029 11:40:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:19.029 11:40:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:19.029 11:40:27 -- pm/common@17 -- $ local monitor 00:01:19.029 11:40:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.029 11:40:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.029 11:40:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.029 11:40:27 -- pm/common@21 -- $ date +%s 00:01:19.029 11:40:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.029 11:40:27 -- pm/common@21 -- $ date +%s 00:01:19.029 11:40:27 -- pm/common@25 -- $ sleep 1 00:01:19.029 11:40:27 -- pm/common@21 -- $ date +%s 00:01:19.029 11:40:27 -- pm/common@21 -- $ date +%s 00:01:19.029 11:40:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740827 00:01:19.029 11:40:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740827 00:01:19.289 11:40:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740827 00:01:19.289 11:40:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740827 00:01:19.289 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740827_collect-vmstat.pm.log 00:01:19.289 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740827_collect-cpu-load.pm.log 00:01:19.289 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740827_collect-cpu-temp.pm.log 00:01:19.289 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740827_collect-bmc-pm.bmc.pm.log 00:01:20.228 11:40:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:20.228 11:40:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.228 11:40:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.228 11:40:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:20.228 11:40:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.228 Mon Dec 9 10:40:28 AM UTC 2024 00:01:20.228 11:40:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.228 v25.01-pre-314-g3fe025922 00:01:20.228 11:40:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.228 11:40:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.228 11:40:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.228 11:40:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.228 11:40:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.228 11:40:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.228 ************************************ 00:01:20.228 START TEST ubsan 00:01:20.228 ************************************ 00:01:20.228 11:40:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:20.228 using ubsan 00:01:20.228 00:01:20.228 real 0m0.000s 00:01:20.228 user 0m0.000s 00:01:20.228 sys 0m0.000s 00:01:20.228 11:40:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.228 11:40:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.228 ************************************ 00:01:20.228 END TEST ubsan 00:01:20.228 ************************************ 00:01:20.228 11:40:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.228 11:40:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.228 11:40:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.228 11:40:28 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:20.228 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:20.228 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:20.796 Using 'verbs' RDMA provider 00:01:33.589 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:45.809 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:45.809 Creating mk/config.mk...done. 00:01:45.809 Creating mk/cc.flags.mk...done. 00:01:45.809 Type 'make' to build. 00:01:45.809 11:40:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:45.809 11:40:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:45.809 11:40:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.809 11:40:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.809 ************************************ 00:01:45.809 START TEST make 00:01:45.809 ************************************ 00:01:45.809 11:40:53 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:46.068 make[1]: Nothing to be done for 'all'. 00:01:54.196 The Meson build system 00:01:54.196 Version: 1.5.0 00:01:54.196 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:54.196 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:54.196 Build type: native build 00:01:54.196 Program cat found: YES (/usr/bin/cat) 00:01:54.196 Project name: DPDK 00:01:54.196 Project version: 24.03.0 00:01:54.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.196 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.196 Host machine cpu family: x86_64 00:01:54.196 Host machine cpu: x86_64 00:01:54.196 Message: ## Building in Developer Mode ## 00:01:54.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.196 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.196 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.196 Program python3 found: YES (/usr/bin/python3) 00:01:54.196 Program cat found: YES (/usr/bin/cat) 00:01:54.196 Compiler for C supports arguments -march=native: YES 00:01:54.196 Checking for size of "void *" : 8 00:01:54.196 Checking for size of "void *" : 8 (cached) 00:01:54.196 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.196 Library m found: YES 00:01:54.196 Library numa found: YES 00:01:54.196 Has header "numaif.h" : YES 00:01:54.196 Library fdt found: NO 00:01:54.196 Library execinfo found: NO 00:01:54.196 Has header "execinfo.h" : YES 00:01:54.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.196 Run-time dependency openssl found: YES 3.1.1 00:01:54.196 Run-time dependency libpcap found: YES 1.10.4 00:01:54.196 Has header "pcap.h" with dependency libpcap: YES 00:01:54.196 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.196 Compiler for C 
supports arguments -Wdeprecated: YES 00:01:54.196 Compiler for C supports arguments -Wformat: YES 00:01:54.196 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.196 Compiler for C supports arguments -Wformat-security: NO 00:01:54.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.196 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.196 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.196 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.196 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.196 Compiler for C supports arguments -Wundef: YES 00:01:54.196 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.196 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.196 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.196 Program objdump found: YES (/usr/bin/objdump) 00:01:54.196 Compiler for C supports arguments -mavx512f: YES 00:01:54.196 Checking if "AVX512 checking" compiles: YES 00:01:54.196 Fetching value of define "__SSE4_2__" : 1 00:01:54.196 Fetching value of define "__AES__" : 1 00:01:54.196 Fetching value of define "__AVX__" : 1 00:01:54.196 Fetching value of define "__AVX2__" : 1 00:01:54.196 Fetching value of define "__AVX512BW__" : 1 00:01:54.196 Fetching value of define "__AVX512CD__" : 1 00:01:54.196 Fetching value of define "__AVX512DQ__" : 1 00:01:54.196 Fetching value of define "__AVX512F__" : 1 00:01:54.196 Fetching value of define "__AVX512VL__" : 1 00:01:54.196 Fetching value of define "__PCLMUL__" : 1 00:01:54.196 Fetching value of define "__RDRND__" : 1 00:01:54.196 Fetching value of define "__RDSEED__" : 1 00:01:54.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.196 Fetching value of define "__znver1__" : (undefined) 00:01:54.196 Fetching value of define "__znver2__" : (undefined) 00:01:54.196 Fetching value of define "__znver3__" : (undefined) 00:01:54.196 Fetching value of define "__znver4__" : (undefined) 00:01:54.196 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.196 Message: lib/log: Defining dependency "log" 00:01:54.196 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.196 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.196 Checking for function "getentropy" : NO 00:01:54.196 Message: lib/eal: Defining dependency "eal" 00:01:54.196 Message: lib/ring: Defining dependency "ring" 00:01:54.196 Message: lib/rcu: Defining dependency "rcu" 00:01:54.196 Message: lib/mempool: Defining dependency "mempool" 00:01:54.196 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.196 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.196 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.196 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.196 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.196 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:54.196 Compiler for C supports arguments -mpclmul: YES 00:01:54.196 Compiler for C supports arguments -maes: YES 00:01:54.196 Compiler for C supports arguments -mavx512f: YES 
(cached) 00:01:54.196 Compiler for C supports arguments -mavx512bw: YES 00:01:54.196 Compiler for C supports arguments -mavx512dq: YES 00:01:54.196 Compiler for C supports arguments -mavx512vl: YES 00:01:54.196 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.196 Compiler for C supports arguments -mavx2: YES 00:01:54.196 Compiler for C supports arguments -mavx: YES 00:01:54.196 Message: lib/net: Defining dependency "net" 00:01:54.196 Message: lib/meter: Defining dependency "meter" 00:01:54.196 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.196 Message: lib/pci: Defining dependency "pci" 00:01:54.196 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.196 Message: lib/hash: Defining dependency "hash" 00:01:54.196 Message: lib/timer: Defining dependency "timer" 00:01:54.196 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.196 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.196 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.196 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.196 Message: lib/power: Defining dependency "power" 00:01:54.196 Message: lib/reorder: Defining dependency "reorder" 00:01:54.196 Message: lib/security: Defining dependency "security" 00:01:54.196 Has header "linux/userfaultfd.h" : YES 00:01:54.197 Has header "linux/vduse.h" : YES 00:01:54.197 Message: lib/vhost: Defining dependency "vhost" 00:01:54.197 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.197 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.197 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.197 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.197 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.197 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.197 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.197 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.197 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.197 Configuring doxy-api-html.conf using configuration 00:01:54.197 Configuring doxy-api-man.conf using configuration 00:01:54.197 Program mandb found: YES (/usr/bin/mandb) 00:01:54.197 Program sphinx-build found: NO 00:01:54.197 Configuring rte_build_config.h using configuration 00:01:54.197 Message: 00:01:54.197 ================= 00:01:54.197 Applications Enabled 00:01:54.197 ================= 00:01:54.197 00:01:54.197 apps: 00:01:54.197 00:01:54.197 00:01:54.197 Message: 00:01:54.197 ================= 00:01:54.197 Libraries Enabled 00:01:54.197 ================= 00:01:54.197 00:01:54.197 libs: 00:01:54.197 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.197 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.197 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.197 00:01:54.197 Message: 00:01:54.197 =============== 00:01:54.197 Drivers Enabled 00:01:54.197 =============== 00:01:54.197 00:01:54.197 common: 00:01:54.197 00:01:54.197 bus: 00:01:54.197 pci, vdev, 00:01:54.197 mempool: 00:01:54.197 ring, 00:01:54.197 dma: 00:01:54.197 00:01:54.197 net: 00:01:54.197 00:01:54.197 crypto: 00:01:54.197 00:01:54.197 compress: 00:01:54.197 00:01:54.197 vdpa: 00:01:54.197 
00:01:54.197 00:01:54.197 Message: 00:01:54.197 ================= 00:01:54.197 Content Skipped 00:01:54.197 ================= 00:01:54.197 00:01:54.197 apps: 00:01:54.197 dumpcap: explicitly disabled via build config 00:01:54.197 graph: explicitly disabled via build config 00:01:54.197 pdump: explicitly disabled via build config 00:01:54.197 proc-info: explicitly disabled via build config 00:01:54.197 test-acl: explicitly disabled via build config 00:01:54.197 test-bbdev: explicitly disabled via build config 00:01:54.197 test-cmdline: explicitly disabled via build config 00:01:54.197 test-compress-perf: explicitly disabled via build config 00:01:54.197 test-crypto-perf: explicitly disabled via build config 00:01:54.197 test-dma-perf: explicitly disabled via build config 00:01:54.197 test-eventdev: explicitly disabled via build config 00:01:54.197 test-fib: explicitly disabled via build config 00:01:54.197 test-flow-perf: explicitly disabled via build config 00:01:54.197 test-gpudev: explicitly disabled via build config 00:01:54.197 test-mldev: explicitly disabled via build config 00:01:54.197 test-pipeline: explicitly disabled via build config 00:01:54.197 test-pmd: explicitly disabled via build config 00:01:54.197 test-regex: explicitly disabled via build config 00:01:54.197 test-sad: explicitly disabled via build config 00:01:54.197 test-security-perf: explicitly disabled via build config 00:01:54.197 00:01:54.197 libs: 00:01:54.197 argparse: explicitly disabled via build config 00:01:54.197 metrics: explicitly disabled via build config 00:01:54.197 acl: explicitly disabled via build config 00:01:54.197 bbdev: explicitly disabled via build config 00:01:54.197 bitratestats: explicitly disabled via build config 00:01:54.197 bpf: explicitly disabled via build config 00:01:54.197 cfgfile: explicitly disabled via build config 00:01:54.197 distributor: explicitly disabled via build config 00:01:54.197 efd: explicitly disabled via build config 00:01:54.197 eventdev: explicitly disabled via build config 00:01:54.197 dispatcher: explicitly disabled via build config 00:01:54.197 gpudev: explicitly disabled via build config 00:01:54.197 gro: explicitly disabled via build config 00:01:54.197 gso: explicitly disabled via build config 00:01:54.197 ip_frag: explicitly disabled via build config 00:01:54.197 jobstats: explicitly disabled via build config 00:01:54.197 latencystats: explicitly disabled via build config 00:01:54.197 lpm: explicitly disabled via build config 00:01:54.197 member: explicitly disabled via build config 00:01:54.197 pcapng: explicitly disabled via build config 00:01:54.197 rawdev: explicitly disabled via build config 00:01:54.197 regexdev: explicitly disabled via build config 00:01:54.197 mldev: explicitly disabled via build config 00:01:54.197 rib: explicitly disabled via build config 00:01:54.197 sched: explicitly disabled via build config 00:01:54.197 stack: explicitly disabled via build config 00:01:54.197 ipsec: explicitly disabled via build config 00:01:54.197 pdcp: explicitly disabled via build config 00:01:54.197 fib: explicitly disabled via build config 00:01:54.197 port: explicitly disabled via build config 00:01:54.197 pdump: explicitly disabled via build config 00:01:54.197 table: explicitly disabled via build config 00:01:54.197 pipeline: explicitly disabled via build config 00:01:54.197 graph: explicitly disabled via build config 00:01:54.197 node: explicitly disabled via build config 00:01:54.197 00:01:54.197 drivers: 00:01:54.197 common/cpt: not in enabled drivers 
build config 00:01:54.197 common/dpaax: not in enabled drivers build config 00:01:54.197 common/iavf: not in enabled drivers build config 00:01:54.197 common/idpf: not in enabled drivers build config 00:01:54.197 common/ionic: not in enabled drivers build config 00:01:54.197 common/mvep: not in enabled drivers build config 00:01:54.197 common/octeontx: not in enabled drivers build config 00:01:54.197 bus/auxiliary: not in enabled drivers build config 00:01:54.197 bus/cdx: not in enabled drivers build config 00:01:54.197 bus/dpaa: not in enabled drivers build config 00:01:54.197 bus/fslmc: not in enabled drivers build config 00:01:54.197 bus/ifpga: not in enabled drivers build config 00:01:54.197 bus/platform: not in enabled drivers build config 00:01:54.197 bus/uacce: not in enabled drivers build config 00:01:54.197 bus/vmbus: not in enabled drivers build config 00:01:54.197 common/cnxk: not in enabled drivers build config 00:01:54.197 common/mlx5: not in enabled drivers build config 00:01:54.197 common/nfp: not in enabled drivers build config 00:01:54.197 common/nitrox: not in enabled drivers build config 00:01:54.197 common/qat: not in enabled drivers build config 00:01:54.197 common/sfc_efx: not in enabled drivers build config 00:01:54.197 mempool/bucket: not in enabled drivers build config 00:01:54.197 mempool/cnxk: not in enabled drivers build config 00:01:54.197 mempool/dpaa: not in enabled drivers build config 00:01:54.197 mempool/dpaa2: not in enabled drivers build config 00:01:54.197 mempool/octeontx: not in enabled drivers build config 00:01:54.197 mempool/stack: not in enabled drivers build config 00:01:54.197 dma/cnxk: not in enabled drivers build config 00:01:54.197 dma/dpaa: not in enabled drivers build config 00:01:54.197 dma/dpaa2: not in enabled drivers build config 00:01:54.197 dma/hisilicon: not in enabled drivers build config 00:01:54.197 dma/idxd: not in enabled drivers build config 00:01:54.197 dma/ioat: not in enabled drivers build config 00:01:54.197 dma/skeleton: not in enabled drivers build config 00:01:54.197 net/af_packet: not in enabled drivers build config 00:01:54.197 net/af_xdp: not in enabled drivers build config 00:01:54.197 net/ark: not in enabled drivers build config 00:01:54.197 net/atlantic: not in enabled drivers build config 00:01:54.197 net/avp: not in enabled drivers build config 00:01:54.197 net/axgbe: not in enabled drivers build config 00:01:54.197 net/bnx2x: not in enabled drivers build config 00:01:54.197 net/bnxt: not in enabled drivers build config 00:01:54.197 net/bonding: not in enabled drivers build config 00:01:54.197 net/cnxk: not in enabled drivers build config 00:01:54.197 net/cpfl: not in enabled drivers build config 00:01:54.197 net/cxgbe: not in enabled drivers build config 00:01:54.197 net/dpaa: not in enabled drivers build config 00:01:54.197 net/dpaa2: not in enabled drivers build config 00:01:54.197 net/e1000: not in enabled drivers build config 00:01:54.197 net/ena: not in enabled drivers build config 00:01:54.197 net/enetc: not in enabled drivers build config 00:01:54.197 net/enetfec: not in enabled drivers build config 00:01:54.197 net/enic: not in enabled drivers build config 00:01:54.197 net/failsafe: not in enabled drivers build config 00:01:54.197 net/fm10k: not in enabled drivers build config 00:01:54.197 net/gve: not in enabled drivers build config 00:01:54.197 net/hinic: not in enabled drivers build config 00:01:54.197 net/hns3: not in enabled drivers build config 00:01:54.197 net/i40e: not in enabled drivers build 
config 00:01:54.197 net/iavf: not in enabled drivers build config 00:01:54.197 net/ice: not in enabled drivers build config 00:01:54.197 net/idpf: not in enabled drivers build config 00:01:54.197 net/igc: not in enabled drivers build config 00:01:54.197 net/ionic: not in enabled drivers build config 00:01:54.197 net/ipn3ke: not in enabled drivers build config 00:01:54.197 net/ixgbe: not in enabled drivers build config 00:01:54.197 net/mana: not in enabled drivers build config 00:01:54.197 net/memif: not in enabled drivers build config 00:01:54.197 net/mlx4: not in enabled drivers build config 00:01:54.197 net/mlx5: not in enabled drivers build config 00:01:54.197 net/mvneta: not in enabled drivers build config 00:01:54.197 net/mvpp2: not in enabled drivers build config 00:01:54.197 net/netvsc: not in enabled drivers build config 00:01:54.197 net/nfb: not in enabled drivers build config 00:01:54.197 net/nfp: not in enabled drivers build config 00:01:54.197 net/ngbe: not in enabled drivers build config 00:01:54.197 net/null: not in enabled drivers build config 00:01:54.197 net/octeontx: not in enabled drivers build config 00:01:54.197 net/octeon_ep: not in enabled drivers build config 00:01:54.197 net/pcap: not in enabled drivers build config 00:01:54.197 net/pfe: not in enabled drivers build config 00:01:54.197 net/qede: not in enabled drivers build config 00:01:54.197 net/ring: not in enabled drivers build config 00:01:54.198 net/sfc: not in enabled drivers build config 00:01:54.198 net/softnic: not in enabled drivers build config 00:01:54.198 net/tap: not in enabled drivers build config 00:01:54.198 net/thunderx: not in enabled drivers build config 00:01:54.198 net/txgbe: not in enabled drivers build config 00:01:54.198 net/vdev_netvsc: not in enabled drivers build config 00:01:54.198 net/vhost: not in enabled drivers build config 00:01:54.198 net/virtio: not in enabled drivers build config 00:01:54.198 net/vmxnet3: not in enabled drivers build config 00:01:54.198 raw/*: missing internal dependency, "rawdev" 00:01:54.198 crypto/armv8: not in enabled drivers build config 00:01:54.198 crypto/bcmfs: not in enabled drivers build config 00:01:54.198 crypto/caam_jr: not in enabled drivers build config 00:01:54.198 crypto/ccp: not in enabled drivers build config 00:01:54.198 crypto/cnxk: not in enabled drivers build config 00:01:54.198 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.198 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.198 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.198 crypto/mlx5: not in enabled drivers build config 00:01:54.198 crypto/mvsam: not in enabled drivers build config 00:01:54.198 crypto/nitrox: not in enabled drivers build config 00:01:54.198 crypto/null: not in enabled drivers build config 00:01:54.198 crypto/octeontx: not in enabled drivers build config 00:01:54.198 crypto/openssl: not in enabled drivers build config 00:01:54.198 crypto/scheduler: not in enabled drivers build config 00:01:54.198 crypto/uadk: not in enabled drivers build config 00:01:54.198 crypto/virtio: not in enabled drivers build config 00:01:54.198 compress/isal: not in enabled drivers build config 00:01:54.198 compress/mlx5: not in enabled drivers build config 00:01:54.198 compress/nitrox: not in enabled drivers build config 00:01:54.198 compress/octeontx: not in enabled drivers build config 00:01:54.198 compress/zlib: not in enabled drivers build config 00:01:54.198 regex/*: missing internal dependency, "regexdev" 00:01:54.198 ml/*: missing 
internal dependency, "mldev" 00:01:54.198 vdpa/ifc: not in enabled drivers build config 00:01:54.198 vdpa/mlx5: not in enabled drivers build config 00:01:54.198 vdpa/nfp: not in enabled drivers build config 00:01:54.198 vdpa/sfc: not in enabled drivers build config 00:01:54.198 event/*: missing internal dependency, "eventdev" 00:01:54.198 baseband/*: missing internal dependency, "bbdev" 00:01:54.198 gpu/*: missing internal dependency, "gpudev" 00:01:54.198 00:01:54.198 00:01:54.198 Build targets in project: 85 00:01:54.198 00:01:54.198 DPDK 24.03.0 00:01:54.198 00:01:54.198 User defined options 00:01:54.198 buildtype : debug 00:01:54.198 default_library : shared 00:01:54.198 libdir : lib 00:01:54.198 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:54.198 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.198 c_link_args : 00:01:54.198 cpu_instruction_set: native 00:01:54.198 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:54.198 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:54.198 enable_docs : false 00:01:54.198 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:54.198 enable_kmods : false 00:01:54.198 max_lcores : 128 00:01:54.198 tests : false 00:01:54.198 00:01:54.198 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.775 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.775 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.775 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.775 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.775 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.775 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.776 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.776 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.776 [8/268] Linking static target lib/librte_kvargs.a 00:01:54.776 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.776 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.776 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.776 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.039 [13/268] Linking static target lib/librte_log.a 00:01:55.039 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.039 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.039 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.039 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.039 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.039 [19/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.039 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.039 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.039 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.039 [23/268] Linking static target lib/librte_pci.a 00:01:55.039 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.301 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.301 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.301 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.301 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.301 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.301 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.301 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.301 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.301 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.301 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.301 [35/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.301 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.301 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.301 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.301 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.301 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.301 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.301 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.301 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.301 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.301 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.301 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.301 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.301 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.301 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.301 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.301 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.301 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.301 [53/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.301 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.301 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.301 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.302 [57/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.302 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.302 
[59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.302 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.560 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.560 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.560 [63/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.561 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.561 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.561 [66/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.561 [67/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.561 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.561 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.561 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.561 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.561 [72/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.561 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.561 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.561 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.561 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.561 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.561 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.561 [79/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.561 [80/268] Linking static target lib/librte_ring.a 00:01:55.561 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.561 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.561 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.561 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.561 [85/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.561 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.561 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.561 [88/268] Linking static target lib/librte_telemetry.a 00:01:55.561 [89/268] Linking static target lib/librte_meter.a 00:01:55.561 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.561 [91/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.561 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.561 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.561 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.561 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.561 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.561 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.561 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.561 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.561 [100/268] 
Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.561 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.561 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.561 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.561 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.561 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.561 [106/268] Linking static target lib/librte_net.a 00:01:55.561 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.561 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.561 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.561 [110/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.561 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.561 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.561 [113/268] Linking static target lib/librte_mempool.a 00:01:55.561 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.561 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.561 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.561 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.561 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.561 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.561 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.561 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.561 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.561 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.561 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.561 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.561 [126/268] Linking static target lib/librte_cmdline.a 00:01:55.561 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.561 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.561 [129/268] Linking static target lib/librte_rcu.a 00:01:55.561 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.561 [131/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.561 [132/268] Linking static target lib/librte_eal.a 00:01:55.561 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.819 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.819 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.819 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.819 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.819 [138/268] Linking target lib/librte_log.so.24.1 00:01:55.819 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.819 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.819 [141/268] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.819 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.819 [143/268] Linking static target lib/librte_mbuf.a 00:01:55.819 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.819 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.819 [146/268] Linking static target lib/librte_timer.a 00:01:55.819 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.819 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.819 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.819 [150/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.819 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.819 [152/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.819 [153/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.819 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.819 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.819 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.819 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.820 [158/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.820 [159/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.820 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.820 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.820 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.820 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.820 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.820 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.079 [166/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.079 [167/268] Linking target lib/librte_telemetry.so.24.1 00:01:56.079 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.079 [169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.079 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.079 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.079 [172/268] Linking static target lib/librte_dmadev.a 00:01:56.079 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.079 [174/268] Linking static target lib/librte_reorder.a 00:01:56.079 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.079 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.079 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.079 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.079 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.079 [180/268] Linking static target lib/librte_security.a 00:01:56.079 [181/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.079 [182/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:56.079 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.079 [184/268] Linking static target lib/librte_compressdev.a 00:01:56.079 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.079 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.079 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.079 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.079 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.079 [190/268] Linking static target lib/librte_power.a 00:01:56.079 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.079 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.079 [193/268] Linking static target drivers/librte_bus_vdev.a 00:01:56.079 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.079 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:56.079 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.079 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.079 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.337 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.337 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.337 [201/268] Linking static target lib/librte_hash.a 00:01:56.337 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.337 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.337 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.337 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.337 [206/268] Linking static target lib/librte_cryptodev.a 00:01:56.337 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.337 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.337 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.337 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:56.337 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.337 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.337 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.337 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:56.337 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.596 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.596 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.596 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.596 [219/268] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.596 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.596 [221/268] Linking static target lib/librte_ethdev.a 00:01:56.856 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.856 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.856 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.114 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.114 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.114 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.050 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.050 [229/268] Linking static target lib/librte_vhost.a 00:01:58.309 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.687 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.957 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.525 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.783 [234/268] Linking target lib/librte_eal.so.24.1 00:02:05.783 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.783 [236/268] Linking target lib/librte_ring.so.24.1 00:02:05.783 [237/268] Linking target lib/librte_timer.so.24.1 00:02:05.783 [238/268] Linking target lib/librte_meter.so.24.1 00:02:05.783 [239/268] Linking target lib/librte_pci.so.24.1 00:02:05.783 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:05.783 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.042 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.042 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.042 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.042 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.042 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.042 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:06.042 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.042 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:06.042 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.042 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.301 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.301 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:06.301 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.301 [255/268] Linking target lib/librte_net.so.24.1 00:02:06.301 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:06.301 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:06.301 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:06.560 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.560 
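[annotation] The [NNN/268] progress lines above are ninja output from SPDK's bundled DPDK submodule build. A minimal sketch of the steps that lead to this phase, assuming a stock SPDK checkout; the --with-rdma flag matches this job's mlx5 NVMe-oF target but is an assumption, not read from this log:

    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init      # pulls the dpdk/ tree being compiled above
    sudo ./scripts/pkgdep.sh         # install build dependencies
    ./configure --with-rdma          # enable the RDMA transport (mlx5 NICs)
    make -j"$(nproc)"                # drives meson/ninja in dpdk/build-tmp
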
[260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.560 [261/268] Linking target lib/librte_security.so.24.1 00:02:06.560 [262/268] Linking target lib/librte_hash.so.24.1 00:02:06.560 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:06.560 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:06.819 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.819 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.819 [267/268] Linking target lib/librte_power.so.24.1 00:02:06.819 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:06.819 INFO: autodetecting backend as ninja 00:02:06.819 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:16.801 CC lib/ut_mock/mock.o 00:02:16.801 CC lib/log/log.o 00:02:16.801 CC lib/log/log_flags.o 00:02:16.801 CC lib/ut/ut.o 00:02:16.801 CC lib/log/log_deprecated.o 00:02:17.060 LIB libspdk_log.a 00:02:17.061 LIB libspdk_ut.a 00:02:17.061 LIB libspdk_ut_mock.a 00:02:17.061 SO libspdk_ut.so.2.0 00:02:17.061 SO libspdk_ut_mock.so.6.0 00:02:17.061 SO libspdk_log.so.7.1 00:02:17.061 SYMLINK libspdk_ut_mock.so 00:02:17.061 SYMLINK libspdk_ut.so 00:02:17.061 SYMLINK libspdk_log.so 00:02:17.319 CC lib/ioat/ioat.o 00:02:17.319 CXX lib/trace_parser/trace.o 00:02:17.319 CC lib/util/base64.o 00:02:17.319 CC lib/dma/dma.o 00:02:17.319 CC lib/util/bit_array.o 00:02:17.319 CC lib/util/cpuset.o 00:02:17.319 CC lib/util/crc16.o 00:02:17.319 CC lib/util/crc32.o 00:02:17.319 CC lib/util/crc32_ieee.o 00:02:17.319 CC lib/util/crc32c.o 00:02:17.319 CC lib/util/crc64.o 00:02:17.319 CC lib/util/dif.o 00:02:17.319 CC lib/util/fd.o 00:02:17.319 CC lib/util/fd_group.o 00:02:17.319 CC lib/util/file.o 00:02:17.319 CC lib/util/hexlify.o 00:02:17.319 CC lib/util/iov.o 00:02:17.319 CC lib/util/math.o 00:02:17.319 CC lib/util/net.o 00:02:17.319 CC lib/util/pipe.o 00:02:17.319 CC lib/util/strerror_tls.o 00:02:17.319 CC lib/util/string.o 00:02:17.319 CC lib/util/uuid.o 00:02:17.319 CC lib/util/xor.o 00:02:17.319 CC lib/util/zipf.o 00:02:17.319 CC lib/util/md5.o 00:02:17.577 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.577 CC lib/vfio_user/host/vfio_user.o 00:02:17.577 LIB libspdk_dma.a 00:02:17.577 SO libspdk_dma.so.5.0 00:02:17.577 LIB libspdk_ioat.a 00:02:17.836 SO libspdk_ioat.so.7.0 00:02:17.836 SYMLINK libspdk_dma.so 00:02:17.836 SYMLINK libspdk_ioat.so 00:02:17.836 LIB libspdk_vfio_user.a 00:02:17.836 SO libspdk_vfio_user.so.5.0 00:02:17.836 LIB libspdk_util.a 00:02:17.836 SYMLINK libspdk_vfio_user.so 00:02:18.095 SO libspdk_util.so.10.1 00:02:18.095 SYMLINK libspdk_util.so 00:02:18.095 LIB libspdk_trace_parser.a 00:02:18.095 SO libspdk_trace_parser.so.6.0 00:02:18.355 SYMLINK libspdk_trace_parser.so 00:02:18.355 CC lib/conf/conf.o 00:02:18.355 CC lib/json/json_parse.o 00:02:18.355 CC lib/json/json_util.o 00:02:18.355 CC lib/json/json_write.o 00:02:18.355 CC lib/idxd/idxd.o 00:02:18.355 CC lib/env_dpdk/env.o 00:02:18.355 CC lib/idxd/idxd_user.o 00:02:18.355 CC lib/rdma_utils/rdma_utils.o 00:02:18.355 CC lib/env_dpdk/memory.o 00:02:18.355 CC lib/vmd/vmd.o 00:02:18.355 CC lib/idxd/idxd_kernel.o 00:02:18.355 CC lib/env_dpdk/pci.o 00:02:18.355 CC lib/vmd/led.o 00:02:18.355 CC lib/env_dpdk/init.o 00:02:18.355 CC lib/env_dpdk/threads.o 00:02:18.355 CC lib/env_dpdk/pci_ioat.o 00:02:18.355 CC lib/env_dpdk/pci_virtio.o 00:02:18.355 CC lib/env_dpdk/pci_vmd.o 
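[annotation] The LIB/SYMLINK lines above (e.g. LIB libspdk_log.a, SYMLINK libspdk_log.so) are SPDK's own libraries being archived and linked after the DPDK phase. A hedged sketch for inspecting the produced objects; build/lib is SPDK's usual output directory, but the path and the symbol prefix grep are assumptions, not taken from this log:

    # List the static archive and shared-object symlink reported above.
    ls -l spdk/build/lib/libspdk_log.*

    # Show a few exported symbols of the shared library (GNU nm flags).
    nm -D --defined-only spdk/build/lib/libspdk_log.so | grep spdk_log_ | head
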
00:02:18.355 CC lib/env_dpdk/pci_idxd.o 00:02:18.355 CC lib/env_dpdk/pci_event.o 00:02:18.355 CC lib/env_dpdk/sigbus_handler.o 00:02:18.355 CC lib/env_dpdk/pci_dpdk.o 00:02:18.355 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.355 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.615 LIB libspdk_conf.a 00:02:18.615 SO libspdk_conf.so.6.0 00:02:18.615 LIB libspdk_rdma_utils.a 00:02:18.615 LIB libspdk_json.a 00:02:18.615 SO libspdk_rdma_utils.so.1.0 00:02:18.615 SYMLINK libspdk_conf.so 00:02:18.615 SO libspdk_json.so.6.0 00:02:18.874 SYMLINK libspdk_rdma_utils.so 00:02:18.874 SYMLINK libspdk_json.so 00:02:18.874 LIB libspdk_idxd.a 00:02:18.874 SO libspdk_idxd.so.12.1 00:02:18.874 LIB libspdk_vmd.a 00:02:18.874 SO libspdk_vmd.so.6.0 00:02:18.874 SYMLINK libspdk_idxd.so 00:02:19.132 SYMLINK libspdk_vmd.so 00:02:19.132 CC lib/rdma_provider/common.o 00:02:19.132 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:19.132 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.132 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.132 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.132 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.132 LIB libspdk_rdma_provider.a 00:02:19.392 SO libspdk_rdma_provider.so.7.0 00:02:19.392 LIB libspdk_jsonrpc.a 00:02:19.392 SO libspdk_jsonrpc.so.6.0 00:02:19.392 SYMLINK libspdk_rdma_provider.so 00:02:19.392 SYMLINK libspdk_jsonrpc.so 00:02:19.392 LIB libspdk_env_dpdk.a 00:02:19.652 SO libspdk_env_dpdk.so.15.1 00:02:19.652 SYMLINK libspdk_env_dpdk.so 00:02:19.652 CC lib/rpc/rpc.o 00:02:19.911 LIB libspdk_rpc.a 00:02:19.911 SO libspdk_rpc.so.6.0 00:02:19.911 SYMLINK libspdk_rpc.so 00:02:20.170 CC lib/notify/notify.o 00:02:20.428 CC lib/notify/notify_rpc.o 00:02:20.428 CC lib/keyring/keyring.o 00:02:20.428 CC lib/keyring/keyring_rpc.o 00:02:20.428 CC lib/trace/trace.o 00:02:20.428 CC lib/trace/trace_flags.o 00:02:20.428 CC lib/trace/trace_rpc.o 00:02:20.428 LIB libspdk_notify.a 00:02:20.428 SO libspdk_notify.so.6.0 00:02:20.428 LIB libspdk_keyring.a 00:02:20.428 LIB libspdk_trace.a 00:02:20.428 SO libspdk_keyring.so.2.0 00:02:20.428 SO libspdk_trace.so.11.0 00:02:20.428 SYMLINK libspdk_notify.so 00:02:20.688 SYMLINK libspdk_keyring.so 00:02:20.688 SYMLINK libspdk_trace.so 00:02:20.947 CC lib/thread/thread.o 00:02:20.947 CC lib/thread/iobuf.o 00:02:20.947 CC lib/sock/sock.o 00:02:20.947 CC lib/sock/sock_rpc.o 00:02:21.205 LIB libspdk_sock.a 00:02:21.205 SO libspdk_sock.so.10.0 00:02:21.465 SYMLINK libspdk_sock.so 00:02:21.725 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.725 CC lib/nvme/nvme_ctrlr.o 00:02:21.725 CC lib/nvme/nvme_fabric.o 00:02:21.725 CC lib/nvme/nvme_ns_cmd.o 00:02:21.725 CC lib/nvme/nvme_ns.o 00:02:21.725 CC lib/nvme/nvme_pcie_common.o 00:02:21.725 CC lib/nvme/nvme_qpair.o 00:02:21.725 CC lib/nvme/nvme_pcie.o 00:02:21.725 CC lib/nvme/nvme.o 00:02:21.725 CC lib/nvme/nvme_quirks.o 00:02:21.725 CC lib/nvme/nvme_transport.o 00:02:21.725 CC lib/nvme/nvme_discovery.o 00:02:21.725 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.725 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.725 CC lib/nvme/nvme_tcp.o 00:02:21.725 CC lib/nvme/nvme_opal.o 00:02:21.725 CC lib/nvme/nvme_io_msg.o 00:02:21.725 CC lib/nvme/nvme_poll_group.o 00:02:21.725 CC lib/nvme/nvme_zns.o 00:02:21.725 CC lib/nvme/nvme_stubs.o 00:02:21.725 CC lib/nvme/nvme_auth.o 00:02:21.725 CC lib/nvme/nvme_cuse.o 00:02:21.725 CC lib/nvme/nvme_rdma.o 00:02:21.983 LIB libspdk_thread.a 00:02:21.983 SO libspdk_thread.so.11.0 00:02:21.983 SYMLINK libspdk_thread.so 00:02:22.242 CC lib/virtio/virtio.o 00:02:22.501 CC lib/virtio/virtio_vhost_user.o 00:02:22.501 CC 
lib/virtio/virtio_vfio_user.o 00:02:22.501 CC lib/virtio/virtio_pci.o 00:02:22.501 CC lib/init/json_config.o 00:02:22.501 CC lib/init/subsystem.o 00:02:22.501 CC lib/init/subsystem_rpc.o 00:02:22.501 CC lib/init/rpc.o 00:02:22.501 CC lib/fsdev/fsdev.o 00:02:22.501 CC lib/fsdev/fsdev_io.o 00:02:22.501 CC lib/fsdev/fsdev_rpc.o 00:02:22.501 CC lib/blob/request.o 00:02:22.501 CC lib/blob/blobstore.o 00:02:22.501 CC lib/blob/zeroes.o 00:02:22.501 CC lib/blob/blob_bs_dev.o 00:02:22.501 CC lib/accel/accel.o 00:02:22.501 CC lib/accel/accel_rpc.o 00:02:22.501 CC lib/accel/accel_sw.o 00:02:22.501 LIB libspdk_init.a 00:02:22.760 SO libspdk_init.so.6.0 00:02:22.760 LIB libspdk_virtio.a 00:02:22.760 SYMLINK libspdk_init.so 00:02:22.760 SO libspdk_virtio.so.7.0 00:02:22.760 SYMLINK libspdk_virtio.so 00:02:23.019 LIB libspdk_fsdev.a 00:02:23.019 SO libspdk_fsdev.so.2.0 00:02:23.019 CC lib/event/app.o 00:02:23.019 CC lib/event/reactor.o 00:02:23.019 CC lib/event/log_rpc.o 00:02:23.019 CC lib/event/app_rpc.o 00:02:23.019 CC lib/event/scheduler_static.o 00:02:23.019 SYMLINK libspdk_fsdev.so 00:02:23.277 LIB libspdk_accel.a 00:02:23.278 SO libspdk_accel.so.16.0 00:02:23.278 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:23.278 SYMLINK libspdk_accel.so 00:02:23.278 LIB libspdk_event.a 00:02:23.278 SO libspdk_event.so.14.0 00:02:23.278 LIB libspdk_nvme.a 00:02:23.536 SYMLINK libspdk_event.so 00:02:23.536 SO libspdk_nvme.so.15.0 00:02:23.536 CC lib/bdev/bdev.o 00:02:23.536 CC lib/bdev/bdev_rpc.o 00:02:23.536 CC lib/bdev/bdev_zone.o 00:02:23.536 CC lib/bdev/part.o 00:02:23.536 CC lib/bdev/scsi_nvme.o 00:02:23.795 SYMLINK libspdk_nvme.so 00:02:23.795 LIB libspdk_fuse_dispatcher.a 00:02:23.795 SO libspdk_fuse_dispatcher.so.1.0 00:02:23.795 SYMLINK libspdk_fuse_dispatcher.so 00:02:24.731 LIB libspdk_blob.a 00:02:24.731 SO libspdk_blob.so.12.0 00:02:24.731 SYMLINK libspdk_blob.so 00:02:24.990 CC lib/blobfs/blobfs.o 00:02:24.990 CC lib/blobfs/tree.o 00:02:24.990 CC lib/lvol/lvol.o 00:02:25.558 LIB libspdk_bdev.a 00:02:25.558 SO libspdk_bdev.so.17.0 00:02:25.558 LIB libspdk_blobfs.a 00:02:25.558 SO libspdk_blobfs.so.11.0 00:02:25.558 SYMLINK libspdk_bdev.so 00:02:25.558 LIB libspdk_lvol.a 00:02:25.558 SYMLINK libspdk_blobfs.so 00:02:25.558 SO libspdk_lvol.so.11.0 00:02:25.818 SYMLINK libspdk_lvol.so 00:02:25.818 CC lib/nbd/nbd.o 00:02:25.818 CC lib/nbd/nbd_rpc.o 00:02:25.818 CC lib/scsi/dev.o 00:02:25.818 CC lib/scsi/lun.o 00:02:25.818 CC lib/scsi/port.o 00:02:25.818 CC lib/scsi/scsi.o 00:02:25.818 CC lib/scsi/scsi_bdev.o 00:02:25.818 CC lib/ublk/ublk.o 00:02:25.818 CC lib/ublk/ublk_rpc.o 00:02:25.818 CC lib/scsi/scsi_pr.o 00:02:25.818 CC lib/scsi/scsi_rpc.o 00:02:25.818 CC lib/scsi/task.o 00:02:25.818 CC lib/nvmf/ctrlr.o 00:02:25.818 CC lib/ftl/ftl_core.o 00:02:25.818 CC lib/nvmf/ctrlr_discovery.o 00:02:25.818 CC lib/ftl/ftl_init.o 00:02:25.818 CC lib/nvmf/ctrlr_bdev.o 00:02:25.818 CC lib/nvmf/subsystem.o 00:02:25.818 CC lib/ftl/ftl_layout.o 00:02:25.818 CC lib/nvmf/nvmf.o 00:02:25.818 CC lib/ftl/ftl_debug.o 00:02:25.818 CC lib/nvmf/nvmf_rpc.o 00:02:25.818 CC lib/ftl/ftl_io.o 00:02:25.818 CC lib/nvmf/transport.o 00:02:25.818 CC lib/nvmf/tcp.o 00:02:25.818 CC lib/ftl/ftl_sb.o 00:02:25.818 CC lib/ftl/ftl_l2p.o 00:02:25.818 CC lib/nvmf/stubs.o 00:02:25.818 CC lib/nvmf/mdns_server.o 00:02:25.818 CC lib/ftl/ftl_l2p_flat.o 00:02:25.818 CC lib/ftl/ftl_nv_cache.o 00:02:25.818 CC lib/nvmf/rdma.o 00:02:25.818 CC lib/nvmf/auth.o 00:02:25.818 CC lib/ftl/ftl_band.o 00:02:25.818 CC lib/ftl/ftl_band_ops.o 00:02:25.818 CC 
lib/ftl/ftl_writer.o 00:02:25.818 CC lib/ftl/ftl_rq.o 00:02:25.818 CC lib/ftl/ftl_reloc.o 00:02:25.818 CC lib/ftl/ftl_l2p_cache.o 00:02:25.818 CC lib/ftl/ftl_p2l.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt.o 00:02:25.818 CC lib/ftl/ftl_p2l_log.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:25.818 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:25.818 CC lib/ftl/utils/ftl_md.o 00:02:25.818 CC lib/ftl/utils/ftl_conf.o 00:02:25.818 CC lib/ftl/utils/ftl_bitmap.o 00:02:25.818 CC lib/ftl/utils/ftl_property.o 00:02:25.818 CC lib/ftl/utils/ftl_mempool.o 00:02:26.076 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:26.076 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.076 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.076 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.076 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.076 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:26.076 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:26.076 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.076 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.076 CC lib/ftl/base/ftl_base_dev.o 00:02:26.076 CC lib/ftl/ftl_trace.o 00:02:26.642 LIB libspdk_nbd.a 00:02:26.642 SO libspdk_nbd.so.7.0 00:02:26.642 SYMLINK libspdk_nbd.so 00:02:26.642 LIB libspdk_ublk.a 00:02:26.642 SO libspdk_ublk.so.3.0 00:02:26.642 LIB libspdk_scsi.a 00:02:26.642 SO libspdk_scsi.so.9.0 00:02:26.642 SYMLINK libspdk_ublk.so 00:02:26.642 SYMLINK libspdk_scsi.so 00:02:26.900 LIB libspdk_ftl.a 00:02:27.159 CC lib/vhost/vhost_rpc.o 00:02:27.159 SO libspdk_ftl.so.9.0 00:02:27.159 CC lib/vhost/vhost.o 00:02:27.159 CC lib/iscsi/conn.o 00:02:27.159 CC lib/iscsi/init_grp.o 00:02:27.159 CC lib/iscsi/portal_grp.o 00:02:27.159 CC lib/vhost/vhost_scsi.o 00:02:27.159 CC lib/iscsi/iscsi.o 00:02:27.159 CC lib/vhost/vhost_blk.o 00:02:27.159 CC lib/iscsi/param.o 00:02:27.159 CC lib/vhost/rte_vhost_user.o 00:02:27.159 CC lib/iscsi/tgt_node.o 00:02:27.159 CC lib/iscsi/iscsi_subsystem.o 00:02:27.159 CC lib/iscsi/iscsi_rpc.o 00:02:27.159 CC lib/iscsi/task.o 00:02:27.159 SYMLINK libspdk_ftl.so 00:02:27.726 LIB libspdk_nvmf.a 00:02:27.726 SO libspdk_nvmf.so.20.0 00:02:27.726 LIB libspdk_vhost.a 00:02:27.985 SO libspdk_vhost.so.8.0 00:02:27.985 SYMLINK libspdk_nvmf.so 00:02:27.985 SYMLINK libspdk_vhost.so 00:02:27.985 LIB libspdk_iscsi.a 00:02:28.245 SO libspdk_iscsi.so.8.0 00:02:28.245 SYMLINK libspdk_iscsi.so 00:02:28.813 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.813 CC module/accel/iaa/accel_iaa.o 00:02:28.813 CC module/accel/iaa/accel_iaa_rpc.o 00:02:28.813 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:28.813 LIB libspdk_env_dpdk_rpc.a 00:02:28.813 CC module/fsdev/aio/fsdev_aio.o 00:02:28.813 CC module/accel/ioat/accel_ioat.o 00:02:28.813 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:28.813 CC module/accel/ioat/accel_ioat_rpc.o 00:02:28.814 CC module/keyring/file/keyring.o 00:02:28.814 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:28.814 CC module/keyring/file/keyring_rpc.o 00:02:28.814 CC module/blob/bdev/blob_bdev.o 00:02:28.814 CC module/sock/posix/posix.o 00:02:28.814 CC module/accel/error/accel_error.o 00:02:28.814 CC module/accel/error/accel_error_rpc.o 00:02:28.814 CC module/keyring/linux/keyring.o 00:02:28.814 CC module/keyring/linux/keyring_rpc.o 00:02:28.814 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:28.814 CC module/accel/dsa/accel_dsa.o 00:02:28.814 CC module/accel/dsa/accel_dsa_rpc.o 00:02:28.814 CC module/scheduler/gscheduler/gscheduler.o 00:02:28.814 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.073 SYMLINK libspdk_env_dpdk_rpc.so 00:02:29.073 LIB libspdk_keyring_file.a 00:02:29.073 LIB libspdk_scheduler_gscheduler.a 00:02:29.073 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.073 LIB libspdk_keyring_linux.a 00:02:29.073 SO libspdk_keyring_file.so.2.0 00:02:29.073 SO libspdk_scheduler_gscheduler.so.4.0 00:02:29.073 LIB libspdk_accel_iaa.a 00:02:29.073 LIB libspdk_accel_ioat.a 00:02:29.073 LIB libspdk_accel_error.a 00:02:29.073 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:29.073 LIB libspdk_scheduler_dynamic.a 00:02:29.073 SO libspdk_keyring_linux.so.1.0 00:02:29.073 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.073 SO libspdk_accel_error.so.2.0 00:02:29.073 SO libspdk_accel_ioat.so.6.0 00:02:29.073 SO libspdk_accel_iaa.so.3.0 00:02:29.073 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.073 SYMLINK libspdk_keyring_file.so 00:02:29.073 LIB libspdk_blob_bdev.a 00:02:29.073 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.073 LIB libspdk_accel_dsa.a 00:02:29.073 SYMLINK libspdk_keyring_linux.so 00:02:29.073 SO libspdk_blob_bdev.so.12.0 00:02:29.073 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.332 SYMLINK libspdk_accel_error.so 00:02:29.332 SYMLINK libspdk_accel_iaa.so 00:02:29.332 SYMLINK libspdk_accel_ioat.so 00:02:29.332 SO libspdk_accel_dsa.so.5.0 00:02:29.332 SYMLINK libspdk_blob_bdev.so 00:02:29.332 SYMLINK libspdk_accel_dsa.so 00:02:29.332 LIB libspdk_fsdev_aio.a 00:02:29.590 SO libspdk_fsdev_aio.so.1.0 00:02:29.590 LIB libspdk_sock_posix.a 00:02:29.590 SO libspdk_sock_posix.so.6.0 00:02:29.590 SYMLINK libspdk_fsdev_aio.so 00:02:29.590 SYMLINK libspdk_sock_posix.so 00:02:29.590 CC module/bdev/gpt/gpt.o 00:02:29.590 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.590 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.590 CC module/bdev/error/vbdev_error.o 00:02:29.590 CC module/bdev/null/bdev_null.o 00:02:29.590 CC module/bdev/split/vbdev_split.o 00:02:29.590 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.590 CC module/bdev/null/bdev_null_rpc.o 00:02:29.590 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.590 CC module/bdev/delay/vbdev_delay.o 00:02:29.590 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.590 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.590 CC module/bdev/ftl/bdev_ftl.o 00:02:29.590 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.590 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.590 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.590 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.590 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.590 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.590 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.590 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:29.590 CC module/bdev/aio/bdev_aio.o 00:02:29.590 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.590 CC module/bdev/raid/bdev_raid.o 00:02:29.590 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.590 CC module/bdev/raid/bdev_raid_sb.o 00:02:29.590 CC 
module/bdev/nvme/bdev_nvme.o 00:02:29.849 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.849 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.849 CC module/bdev/malloc/bdev_malloc.o 00:02:29.849 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.849 CC module/bdev/raid/raid0.o 00:02:29.849 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.849 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.849 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.849 CC module/bdev/raid/raid1.o 00:02:29.849 CC module/bdev/nvme/vbdev_opal.o 00:02:29.849 CC module/bdev/nvme/nvme_rpc.o 00:02:29.849 CC module/bdev/raid/concat.o 00:02:29.849 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.849 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.849 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.849 LIB libspdk_blobfs_bdev.a 00:02:29.849 LIB libspdk_bdev_gpt.a 00:02:29.849 SO libspdk_blobfs_bdev.so.6.0 00:02:30.107 LIB libspdk_bdev_null.a 00:02:30.107 LIB libspdk_bdev_error.a 00:02:30.107 SO libspdk_bdev_gpt.so.6.0 00:02:30.107 LIB libspdk_bdev_split.a 00:02:30.107 LIB libspdk_bdev_ftl.a 00:02:30.107 SYMLINK libspdk_blobfs_bdev.so 00:02:30.107 SO libspdk_bdev_error.so.6.0 00:02:30.107 SO libspdk_bdev_null.so.6.0 00:02:30.107 SO libspdk_bdev_split.so.6.0 00:02:30.107 SO libspdk_bdev_ftl.so.6.0 00:02:30.107 SYMLINK libspdk_bdev_gpt.so 00:02:30.107 LIB libspdk_bdev_passthru.a 00:02:30.107 LIB libspdk_bdev_aio.a 00:02:30.107 SYMLINK libspdk_bdev_error.so 00:02:30.107 SYMLINK libspdk_bdev_null.so 00:02:30.107 SO libspdk_bdev_passthru.so.6.0 00:02:30.107 SO libspdk_bdev_aio.so.6.0 00:02:30.107 SYMLINK libspdk_bdev_split.so 00:02:30.107 SYMLINK libspdk_bdev_ftl.so 00:02:30.107 LIB libspdk_bdev_zone_block.a 00:02:30.107 LIB libspdk_bdev_delay.a 00:02:30.107 LIB libspdk_bdev_iscsi.a 00:02:30.107 LIB libspdk_bdev_malloc.a 00:02:30.107 SO libspdk_bdev_zone_block.so.6.0 00:02:30.107 SYMLINK libspdk_bdev_passthru.so 00:02:30.107 SO libspdk_bdev_delay.so.6.0 00:02:30.107 SO libspdk_bdev_iscsi.so.6.0 00:02:30.107 SYMLINK libspdk_bdev_aio.so 00:02:30.107 SO libspdk_bdev_malloc.so.6.0 00:02:30.107 LIB libspdk_bdev_lvol.a 00:02:30.107 SYMLINK libspdk_bdev_zone_block.so 00:02:30.107 SYMLINK libspdk_bdev_delay.so 00:02:30.107 SO libspdk_bdev_lvol.so.6.0 00:02:30.107 SYMLINK libspdk_bdev_iscsi.so 00:02:30.366 SYMLINK libspdk_bdev_malloc.so 00:02:30.366 LIB libspdk_bdev_virtio.a 00:02:30.366 SYMLINK libspdk_bdev_lvol.so 00:02:30.366 SO libspdk_bdev_virtio.so.6.0 00:02:30.366 SYMLINK libspdk_bdev_virtio.so 00:02:30.626 LIB libspdk_bdev_raid.a 00:02:30.626 SO libspdk_bdev_raid.so.6.0 00:02:30.626 SYMLINK libspdk_bdev_raid.so 00:02:31.562 LIB libspdk_bdev_nvme.a 00:02:31.562 SO libspdk_bdev_nvme.so.7.1 00:02:31.821 SYMLINK libspdk_bdev_nvme.so 00:02:32.389 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.389 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.389 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.389 CC module/event/subsystems/sock/sock.o 00:02:32.389 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.389 CC module/event/subsystems/vmd/vmd.o 00:02:32.389 CC module/event/subsystems/keyring/keyring.o 00:02:32.389 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.389 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.389 LIB libspdk_event_scheduler.a 00:02:32.389 LIB libspdk_event_keyring.a 00:02:32.389 LIB libspdk_event_fsdev.a 00:02:32.648 LIB libspdk_event_vmd.a 00:02:32.648 LIB libspdk_event_vhost_blk.a 00:02:32.648 LIB libspdk_event_sock.a 00:02:32.648 LIB libspdk_event_iobuf.a 00:02:32.648 SO 
libspdk_event_scheduler.so.4.0 00:02:32.648 SO libspdk_event_fsdev.so.1.0 00:02:32.648 SO libspdk_event_vhost_blk.so.3.0 00:02:32.648 SO libspdk_event_keyring.so.1.0 00:02:32.648 SO libspdk_event_sock.so.5.0 00:02:32.648 SO libspdk_event_vmd.so.6.0 00:02:32.648 SO libspdk_event_iobuf.so.3.0 00:02:32.648 SYMLINK libspdk_event_scheduler.so 00:02:32.648 SYMLINK libspdk_event_fsdev.so 00:02:32.648 SYMLINK libspdk_event_vhost_blk.so 00:02:32.648 SYMLINK libspdk_event_keyring.so 00:02:32.648 SYMLINK libspdk_event_sock.so 00:02:32.648 SYMLINK libspdk_event_vmd.so 00:02:32.648 SYMLINK libspdk_event_iobuf.so 00:02:32.907 CC module/event/subsystems/accel/accel.o 00:02:33.165 LIB libspdk_event_accel.a 00:02:33.166 SO libspdk_event_accel.so.6.0 00:02:33.166 SYMLINK libspdk_event_accel.so 00:02:33.424 CC module/event/subsystems/bdev/bdev.o 00:02:33.683 LIB libspdk_event_bdev.a 00:02:33.683 SO libspdk_event_bdev.so.6.0 00:02:33.683 SYMLINK libspdk_event_bdev.so 00:02:33.941 CC module/event/subsystems/scsi/scsi.o 00:02:33.941 CC module/event/subsystems/ublk/ublk.o 00:02:33.941 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:33.941 CC module/event/subsystems/nbd/nbd.o 00:02:33.941 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.200 LIB libspdk_event_nbd.a 00:02:34.200 LIB libspdk_event_ublk.a 00:02:34.200 LIB libspdk_event_scsi.a 00:02:34.200 SO libspdk_event_ublk.so.3.0 00:02:34.200 SO libspdk_event_nbd.so.6.0 00:02:34.200 SO libspdk_event_scsi.so.6.0 00:02:34.200 LIB libspdk_event_nvmf.a 00:02:34.200 SYMLINK libspdk_event_nbd.so 00:02:34.200 SYMLINK libspdk_event_ublk.so 00:02:34.200 SO libspdk_event_nvmf.so.6.0 00:02:34.200 SYMLINK libspdk_event_scsi.so 00:02:34.200 SYMLINK libspdk_event_nvmf.so 00:02:34.459 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.459 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.718 LIB libspdk_event_vhost_scsi.a 00:02:34.718 LIB libspdk_event_iscsi.a 00:02:34.718 SO libspdk_event_vhost_scsi.so.3.0 00:02:34.718 SO libspdk_event_iscsi.so.6.0 00:02:34.718 SYMLINK libspdk_event_vhost_scsi.so 00:02:34.718 SYMLINK libspdk_event_iscsi.so 00:02:34.978 SO libspdk.so.6.0 00:02:34.978 SYMLINK libspdk.so 00:02:35.237 CC app/trace_record/trace_record.o 00:02:35.237 CXX app/trace/trace.o 00:02:35.237 CC app/spdk_nvme_identify/identify.o 00:02:35.237 TEST_HEADER include/spdk/accel.h 00:02:35.237 TEST_HEADER include/spdk/accel_module.h 00:02:35.237 TEST_HEADER include/spdk/assert.h 00:02:35.237 TEST_HEADER include/spdk/barrier.h 00:02:35.237 TEST_HEADER include/spdk/base64.h 00:02:35.237 TEST_HEADER include/spdk/bdev.h 00:02:35.237 TEST_HEADER include/spdk/bdev_module.h 00:02:35.237 TEST_HEADER include/spdk/bit_array.h 00:02:35.237 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.237 CC app/spdk_top/spdk_top.o 00:02:35.237 TEST_HEADER include/spdk/bit_pool.h 00:02:35.237 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.237 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.237 CC test/rpc_client/rpc_client_test.o 00:02:35.237 CC app/spdk_nvme_perf/perf.o 00:02:35.237 TEST_HEADER include/spdk/blobfs.h 00:02:35.237 TEST_HEADER include/spdk/conf.h 00:02:35.237 TEST_HEADER include/spdk/blob.h 00:02:35.237 CC app/spdk_lspci/spdk_lspci.o 00:02:35.237 TEST_HEADER include/spdk/config.h 00:02:35.237 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.237 TEST_HEADER include/spdk/cpuset.h 00:02:35.237 TEST_HEADER include/spdk/crc16.h 00:02:35.237 TEST_HEADER include/spdk/crc32.h 00:02:35.237 TEST_HEADER include/spdk/crc64.h 00:02:35.237 TEST_HEADER include/spdk/dif.h 00:02:35.237 
TEST_HEADER include/spdk/endian.h 00:02:35.500 TEST_HEADER include/spdk/dma.h 00:02:35.500 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.500 TEST_HEADER include/spdk/event.h 00:02:35.500 TEST_HEADER include/spdk/fd_group.h 00:02:35.500 TEST_HEADER include/spdk/env.h 00:02:35.500 TEST_HEADER include/spdk/file.h 00:02:35.500 TEST_HEADER include/spdk/fd.h 00:02:35.500 TEST_HEADER include/spdk/fsdev.h 00:02:35.500 TEST_HEADER include/spdk/fsdev_module.h 00:02:35.500 TEST_HEADER include/spdk/ftl.h 00:02:35.500 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:35.500 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.500 TEST_HEADER include/spdk/hexlify.h 00:02:35.500 TEST_HEADER include/spdk/histogram_data.h 00:02:35.500 TEST_HEADER include/spdk/idxd.h 00:02:35.500 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.500 TEST_HEADER include/spdk/init.h 00:02:35.500 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.500 TEST_HEADER include/spdk/ioat.h 00:02:35.500 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.500 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.500 TEST_HEADER include/spdk/json.h 00:02:35.500 TEST_HEADER include/spdk/keyring_module.h 00:02:35.501 TEST_HEADER include/spdk/keyring.h 00:02:35.501 TEST_HEADER include/spdk/log.h 00:02:35.501 TEST_HEADER include/spdk/likely.h 00:02:35.501 TEST_HEADER include/spdk/memory.h 00:02:35.501 TEST_HEADER include/spdk/lvol.h 00:02:35.501 TEST_HEADER include/spdk/md5.h 00:02:35.501 TEST_HEADER include/spdk/mmio.h 00:02:35.501 TEST_HEADER include/spdk/nbd.h 00:02:35.501 TEST_HEADER include/spdk/net.h 00:02:35.501 TEST_HEADER include/spdk/nvme.h 00:02:35.501 TEST_HEADER include/spdk/notify.h 00:02:35.501 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.501 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.501 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.501 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.501 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.501 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.501 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.501 TEST_HEADER include/spdk/nvmf.h 00:02:35.501 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.501 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.501 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.501 TEST_HEADER include/spdk/opal.h 00:02:35.501 CC app/iscsi_tgt/iscsi_tgt.o 00:02:35.501 TEST_HEADER include/spdk/pipe.h 00:02:35.501 TEST_HEADER include/spdk/opal_spec.h 00:02:35.501 CC app/spdk_dd/spdk_dd.o 00:02:35.501 TEST_HEADER include/spdk/pci_ids.h 00:02:35.501 TEST_HEADER include/spdk/queue.h 00:02:35.501 TEST_HEADER include/spdk/reduce.h 00:02:35.501 TEST_HEADER include/spdk/rpc.h 00:02:35.501 TEST_HEADER include/spdk/scheduler.h 00:02:35.501 CC app/nvmf_tgt/nvmf_main.o 00:02:35.501 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.501 TEST_HEADER include/spdk/scsi.h 00:02:35.501 TEST_HEADER include/spdk/stdinc.h 00:02:35.501 TEST_HEADER include/spdk/string.h 00:02:35.501 TEST_HEADER include/spdk/sock.h 00:02:35.501 TEST_HEADER include/spdk/thread.h 00:02:35.501 TEST_HEADER include/spdk/trace_parser.h 00:02:35.501 TEST_HEADER include/spdk/trace.h 00:02:35.501 TEST_HEADER include/spdk/tree.h 00:02:35.501 TEST_HEADER include/spdk/ublk.h 00:02:35.501 TEST_HEADER include/spdk/util.h 00:02:35.501 TEST_HEADER include/spdk/version.h 00:02:35.501 TEST_HEADER include/spdk/uuid.h 00:02:35.501 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.501 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.501 TEST_HEADER include/spdk/vhost.h 00:02:35.501 TEST_HEADER include/spdk/xor.h 00:02:35.501 TEST_HEADER include/spdk/vmd.h 
00:02:35.501 CXX test/cpp_headers/accel.o 00:02:35.501 TEST_HEADER include/spdk/zipf.h 00:02:35.501 CXX test/cpp_headers/assert.o 00:02:35.501 CXX test/cpp_headers/barrier.o 00:02:35.501 CXX test/cpp_headers/accel_module.o 00:02:35.501 CXX test/cpp_headers/base64.o 00:02:35.501 CXX test/cpp_headers/bdev.o 00:02:35.501 CXX test/cpp_headers/bdev_module.o 00:02:35.501 CXX test/cpp_headers/bit_array.o 00:02:35.501 CXX test/cpp_headers/bdev_zone.o 00:02:35.501 CXX test/cpp_headers/blob_bdev.o 00:02:35.501 CXX test/cpp_headers/blobfs_bdev.o 00:02:35.501 CXX test/cpp_headers/blobfs.o 00:02:35.501 CXX test/cpp_headers/blob.o 00:02:35.501 CXX test/cpp_headers/bit_pool.o 00:02:35.501 CXX test/cpp_headers/conf.o 00:02:35.501 CXX test/cpp_headers/config.o 00:02:35.501 CC app/spdk_tgt/spdk_tgt.o 00:02:35.501 CXX test/cpp_headers/crc16.o 00:02:35.501 CXX test/cpp_headers/crc64.o 00:02:35.501 CXX test/cpp_headers/crc32.o 00:02:35.501 CXX test/cpp_headers/dma.o 00:02:35.501 CXX test/cpp_headers/cpuset.o 00:02:35.501 CXX test/cpp_headers/env_dpdk.o 00:02:35.501 CXX test/cpp_headers/env.o 00:02:35.501 CXX test/cpp_headers/dif.o 00:02:35.501 CXX test/cpp_headers/event.o 00:02:35.501 CXX test/cpp_headers/endian.o 00:02:35.501 CXX test/cpp_headers/fd.o 00:02:35.501 CXX test/cpp_headers/fsdev.o 00:02:35.501 CXX test/cpp_headers/file.o 00:02:35.501 CXX test/cpp_headers/fd_group.o 00:02:35.501 CXX test/cpp_headers/gpt_spec.o 00:02:35.501 CXX test/cpp_headers/fsdev_module.o 00:02:35.501 CXX test/cpp_headers/fuse_dispatcher.o 00:02:35.501 CXX test/cpp_headers/ftl.o 00:02:35.501 CXX test/cpp_headers/histogram_data.o 00:02:35.501 CXX test/cpp_headers/hexlify.o 00:02:35.501 CXX test/cpp_headers/idxd_spec.o 00:02:35.501 CXX test/cpp_headers/idxd.o 00:02:35.501 CXX test/cpp_headers/init.o 00:02:35.501 CXX test/cpp_headers/ioat.o 00:02:35.501 CXX test/cpp_headers/json.o 00:02:35.501 CXX test/cpp_headers/jsonrpc.o 00:02:35.501 CXX test/cpp_headers/iscsi_spec.o 00:02:35.501 CXX test/cpp_headers/ioat_spec.o 00:02:35.501 CXX test/cpp_headers/keyring_module.o 00:02:35.501 CXX test/cpp_headers/keyring.o 00:02:35.501 CXX test/cpp_headers/likely.o 00:02:35.501 CXX test/cpp_headers/log.o 00:02:35.501 CXX test/cpp_headers/md5.o 00:02:35.501 CXX test/cpp_headers/lvol.o 00:02:35.501 CXX test/cpp_headers/mmio.o 00:02:35.501 CXX test/cpp_headers/memory.o 00:02:35.501 CXX test/cpp_headers/nbd.o 00:02:35.501 CXX test/cpp_headers/net.o 00:02:35.501 CXX test/cpp_headers/notify.o 00:02:35.501 CXX test/cpp_headers/nvme_intel.o 00:02:35.501 CXX test/cpp_headers/nvme.o 00:02:35.501 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.501 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.501 CXX test/cpp_headers/nvme_spec.o 00:02:35.501 CXX test/cpp_headers/nvme_zns.o 00:02:35.501 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.501 CXX test/cpp_headers/nvmf.o 00:02:35.501 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.501 CXX test/cpp_headers/nvmf_spec.o 00:02:35.501 CXX test/cpp_headers/nvmf_transport.o 00:02:35.501 CXX test/cpp_headers/opal.o 00:02:35.501 CXX test/cpp_headers/opal_spec.o 00:02:35.501 CC examples/ioat/verify/verify.o 00:02:35.501 CC test/app/jsoncat/jsoncat.o 00:02:35.501 CC test/app/histogram_perf/histogram_perf.o 00:02:35.501 CC test/app/stub/stub.o 00:02:35.501 CC test/env/vtophys/vtophys.o 00:02:35.501 CC test/env/pci/pci_ut.o 00:02:35.501 CC examples/ioat/perf/perf.o 00:02:35.501 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:35.501 CC examples/util/zipf/zipf.o 00:02:35.501 CC test/thread/poller_perf/poller_perf.o 00:02:35.771 CC 
test/env/memory/memory_ut.o 00:02:35.771 CC app/fio/bdev/fio_plugin.o 00:02:35.771 CC app/fio/nvme/fio_plugin.o 00:02:35.771 CC test/dma/test_dma/test_dma.o 00:02:35.771 CC test/app/bdev_svc/bdev_svc.o 00:02:36.038 LINK spdk_nvme_discover 00:02:36.038 LINK spdk_lspci 00:02:36.038 CC test/env/mem_callbacks/mem_callbacks.o 00:02:36.038 LINK rpc_client_test 00:02:36.038 LINK iscsi_tgt 00:02:36.038 LINK jsoncat 00:02:36.038 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.038 LINK histogram_perf 00:02:36.038 LINK interrupt_tgt 00:02:36.038 LINK vtophys 00:02:36.038 LINK nvmf_tgt 00:02:36.038 CXX test/cpp_headers/pci_ids.o 00:02:36.038 CXX test/cpp_headers/pipe.o 00:02:36.038 CXX test/cpp_headers/queue.o 00:02:36.038 CXX test/cpp_headers/reduce.o 00:02:36.038 CXX test/cpp_headers/rpc.o 00:02:36.038 LINK stub 00:02:36.038 LINK env_dpdk_post_init 00:02:36.038 CXX test/cpp_headers/scheduler.o 00:02:36.038 CXX test/cpp_headers/scsi.o 00:02:36.038 CXX test/cpp_headers/scsi_spec.o 00:02:36.038 CXX test/cpp_headers/sock.o 00:02:36.038 CXX test/cpp_headers/stdinc.o 00:02:36.038 LINK spdk_tgt 00:02:36.038 CXX test/cpp_headers/string.o 00:02:36.038 LINK spdk_trace_record 00:02:36.038 CXX test/cpp_headers/thread.o 00:02:36.038 CXX test/cpp_headers/trace.o 00:02:36.038 CXX test/cpp_headers/trace_parser.o 00:02:36.038 CXX test/cpp_headers/tree.o 00:02:36.038 CXX test/cpp_headers/ublk.o 00:02:36.038 CXX test/cpp_headers/util.o 00:02:36.038 CXX test/cpp_headers/uuid.o 00:02:36.038 CXX test/cpp_headers/version.o 00:02:36.038 CXX test/cpp_headers/vfio_user_pci.o 00:02:36.038 CXX test/cpp_headers/vfio_user_spec.o 00:02:36.038 CXX test/cpp_headers/vhost.o 00:02:36.038 CXX test/cpp_headers/vmd.o 00:02:36.038 CXX test/cpp_headers/zipf.o 00:02:36.038 CXX test/cpp_headers/xor.o 00:02:36.296 LINK verify 00:02:36.296 LINK poller_perf 00:02:36.296 LINK zipf 00:02:36.296 LINK spdk_dd 00:02:36.296 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.296 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.296 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:36.296 LINK ioat_perf 00:02:36.296 LINK bdev_svc 00:02:36.296 LINK spdk_trace 00:02:36.555 LINK pci_ut 00:02:36.556 LINK spdk_bdev 00:02:36.556 LINK spdk_nvme 00:02:36.556 LINK nvme_fuzz 00:02:36.556 LINK test_dma 00:02:36.556 CC examples/vmd/lsvmd/lsvmd.o 00:02:36.556 CC examples/idxd/perf/perf.o 00:02:36.556 CC examples/vmd/led/led.o 00:02:36.556 CC examples/sock/hello_world/hello_sock.o 00:02:36.556 LINK spdk_nvme_perf 00:02:36.556 CC test/event/reactor/reactor.o 00:02:36.814 CC test/event/event_perf/event_perf.o 00:02:36.814 LINK vhost_fuzz 00:02:36.814 CC examples/thread/thread/thread_ex.o 00:02:36.814 CC test/event/reactor_perf/reactor_perf.o 00:02:36.814 LINK mem_callbacks 00:02:36.814 CC test/event/app_repeat/app_repeat.o 00:02:36.814 CC test/event/scheduler/scheduler.o 00:02:36.814 LINK spdk_top 00:02:36.814 LINK spdk_nvme_identify 00:02:36.814 CC app/vhost/vhost.o 00:02:36.814 LINK led 00:02:36.814 LINK lsvmd 00:02:36.814 LINK reactor 00:02:36.814 LINK event_perf 00:02:36.814 LINK reactor_perf 00:02:36.814 LINK app_repeat 00:02:36.814 LINK hello_sock 00:02:37.071 LINK thread 00:02:37.071 LINK idxd_perf 00:02:37.071 LINK scheduler 00:02:37.071 LINK vhost 00:02:37.071 CC test/nvme/overhead/overhead.o 00:02:37.071 CC test/nvme/connect_stress/connect_stress.o 00:02:37.071 CC test/nvme/reserve/reserve.o 00:02:37.071 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.071 CC test/nvme/aer/aer.o 00:02:37.071 CC test/nvme/reset/reset.o 00:02:37.071 CC 
test/nvme/e2edp/nvme_dp.o 00:02:37.071 CC test/nvme/fdp/fdp.o 00:02:37.071 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.071 CC test/nvme/boot_partition/boot_partition.o 00:02:37.071 CC test/nvme/sgl/sgl.o 00:02:37.071 CC test/nvme/startup/startup.o 00:02:37.071 CC test/nvme/compliance/nvme_compliance.o 00:02:37.071 CC test/nvme/err_injection/err_injection.o 00:02:37.071 CC test/nvme/simple_copy/simple_copy.o 00:02:37.071 CC test/nvme/cuse/cuse.o 00:02:37.071 LINK memory_ut 00:02:37.071 CC test/blobfs/mkfs/mkfs.o 00:02:37.071 CC test/accel/dif/dif.o 00:02:37.329 CC test/lvol/esnap/esnap.o 00:02:37.329 LINK connect_stress 00:02:37.329 LINK doorbell_aers 00:02:37.329 LINK startup 00:02:37.329 LINK reserve 00:02:37.329 LINK boot_partition 00:02:37.329 LINK fused_ordering 00:02:37.329 LINK err_injection 00:02:37.329 CC examples/nvme/arbitration/arbitration.o 00:02:37.329 CC examples/nvme/hello_world/hello_world.o 00:02:37.329 CC examples/nvme/abort/abort.o 00:02:37.329 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.329 CC examples/nvme/hotplug/hotplug.o 00:02:37.329 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.329 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.329 CC examples/nvme/reconnect/reconnect.o 00:02:37.329 LINK simple_copy 00:02:37.329 LINK nvme_dp 00:02:37.329 LINK reset 00:02:37.329 LINK mkfs 00:02:37.329 LINK sgl 00:02:37.329 LINK aer 00:02:37.329 LINK overhead 00:02:37.329 LINK nvme_compliance 00:02:37.329 CC examples/accel/perf/accel_perf.o 00:02:37.329 LINK fdp 00:02:37.587 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:37.587 CC examples/blob/hello_world/hello_blob.o 00:02:37.587 CC examples/blob/cli/blobcli.o 00:02:37.587 LINK pmr_persistence 00:02:37.587 LINK cmb_copy 00:02:37.587 LINK hello_world 00:02:37.587 LINK hotplug 00:02:37.587 LINK arbitration 00:02:37.587 LINK abort 00:02:37.587 LINK reconnect 00:02:37.587 LINK iscsi_fuzz 00:02:37.846 LINK hello_blob 00:02:37.846 LINK hello_fsdev 00:02:37.846 LINK dif 00:02:37.846 LINK nvme_manage 00:02:37.846 LINK accel_perf 00:02:37.846 LINK blobcli 00:02:38.105 LINK cuse 00:02:38.363 CC test/bdev/bdevio/bdevio.o 00:02:38.363 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.363 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.622 LINK hello_bdev 00:02:38.622 LINK bdevio 00:02:38.881 LINK bdevperf 00:02:39.449 CC examples/nvmf/nvmf/nvmf.o 00:02:39.708 LINK nvmf 00:02:41.087 LINK esnap 00:02:41.087 00:02:41.087 real 0m55.336s 00:02:41.087 user 8m6.080s 00:02:41.087 sys 3m37.538s 00:02:41.087 11:41:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.087 11:41:48 make -- common/autotest_common.sh@10 -- $ set +x 00:02:41.087 ************************************ 00:02:41.087 END TEST make 00:02:41.087 ************************************ 00:02:41.087 11:41:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:41.087 11:41:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:41.087 11:41:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:41.087 11:41:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.087 11:41:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:41.087 11:41:49 -- pm/common@44 -- $ pid=2964899 00:02:41.087 11:41:49 -- pm/common@50 -- $ kill -TERM 2964899 00:02:41.087 11:41:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.087 11:41:49 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:41.087 11:41:49 -- pm/common@44 -- $ pid=2964900 00:02:41.087 11:41:49 -- pm/common@50 -- $ kill -TERM 2964900 00:02:41.087 11:41:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.087 11:41:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:41.087 11:41:49 -- pm/common@44 -- $ pid=2964903 00:02:41.087 11:41:49 -- pm/common@50 -- $ kill -TERM 2964903 00:02:41.087 11:41:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.087 11:41:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:41.087 11:41:49 -- pm/common@44 -- $ pid=2964927 00:02:41.087 11:41:49 -- pm/common@50 -- $ sudo -E kill -TERM 2964927 00:02:41.087 11:41:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:41.087 11:41:49 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:41.347 11:41:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:41.347 11:41:49 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:41.347 11:41:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:41.347 11:41:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:41.347 11:41:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:41.347 11:41:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:41.347 11:41:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:41.347 11:41:49 -- scripts/common.sh@336 -- # IFS=.-: 00:02:41.347 11:41:49 -- scripts/common.sh@336 -- # read -ra ver1 00:02:41.347 11:41:49 -- scripts/common.sh@337 -- # IFS=.-: 00:02:41.347 11:41:49 -- scripts/common.sh@337 -- # read -ra ver2 00:02:41.347 11:41:49 -- scripts/common.sh@338 -- # local 'op=<' 00:02:41.347 11:41:49 -- scripts/common.sh@340 -- # ver1_l=2 00:02:41.347 11:41:49 -- scripts/common.sh@341 -- # ver2_l=1 00:02:41.347 11:41:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:41.347 11:41:49 -- scripts/common.sh@344 -- # case "$op" in 00:02:41.347 11:41:49 -- scripts/common.sh@345 -- # : 1 00:02:41.347 11:41:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:41.347 11:41:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.347 11:41:49 -- scripts/common.sh@365 -- # decimal 1 00:02:41.347 11:41:49 -- scripts/common.sh@353 -- # local d=1 00:02:41.347 11:41:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:41.347 11:41:49 -- scripts/common.sh@355 -- # echo 1 00:02:41.347 11:41:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:41.347 11:41:49 -- scripts/common.sh@366 -- # decimal 2 00:02:41.347 11:41:49 -- scripts/common.sh@353 -- # local d=2 00:02:41.347 11:41:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:41.347 11:41:49 -- scripts/common.sh@355 -- # echo 2 00:02:41.347 11:41:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:41.347 11:41:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:41.347 11:41:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:41.347 11:41:49 -- scripts/common.sh@368 -- # return 0 00:02:41.347 11:41:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:41.347 11:41:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.347 --rc genhtml_branch_coverage=1 00:02:41.347 --rc genhtml_function_coverage=1 00:02:41.347 --rc genhtml_legend=1 00:02:41.347 --rc geninfo_all_blocks=1 00:02:41.347 --rc geninfo_unexecuted_blocks=1 00:02:41.347 00:02:41.347 ' 00:02:41.347 11:41:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.347 --rc genhtml_branch_coverage=1 00:02:41.347 --rc genhtml_function_coverage=1 00:02:41.347 --rc genhtml_legend=1 00:02:41.347 --rc geninfo_all_blocks=1 00:02:41.347 --rc geninfo_unexecuted_blocks=1 00:02:41.347 00:02:41.347 ' 00:02:41.347 11:41:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.347 --rc genhtml_branch_coverage=1 00:02:41.347 --rc genhtml_function_coverage=1 00:02:41.347 --rc genhtml_legend=1 00:02:41.347 --rc geninfo_all_blocks=1 00:02:41.347 --rc geninfo_unexecuted_blocks=1 00:02:41.347 00:02:41.347 ' 00:02:41.347 11:41:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.347 --rc genhtml_branch_coverage=1 00:02:41.347 --rc genhtml_function_coverage=1 00:02:41.347 --rc genhtml_legend=1 00:02:41.347 --rc geninfo_all_blocks=1 00:02:41.347 --rc geninfo_unexecuted_blocks=1 00:02:41.347 00:02:41.347 ' 00:02:41.347 11:41:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.347 11:41:49 -- nvmf/common.sh@7 -- # uname -s 00:02:41.347 11:41:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.347 11:41:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.347 11:41:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.347 11:41:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.347 11:41:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.347 11:41:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.347 11:41:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.347 11:41:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.347 11:41:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.347 11:41:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.347 11:41:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.347 11:41:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.347 11:41:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.347 11:41:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.347 11:41:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.347 11:41:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.347 11:41:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:41.347 11:41:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:41.347 11:41:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.347 11:41:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.347 11:41:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.347 11:41:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.347 11:41:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.348 11:41:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.348 11:41:49 -- paths/export.sh@5 -- # export PATH 00:02:41.348 11:41:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.348 11:41:49 -- nvmf/common.sh@51 -- # : 0 00:02:41.348 11:41:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:41.348 11:41:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:41.348 11:41:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.348 11:41:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.348 11:41:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.348 11:41:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:41.348 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:41.348 11:41:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:41.348 11:41:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:41.348 11:41:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:41.348 11:41:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.348 11:41:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.348 11:41:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.348 11:41:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.348 11:41:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:41.348 
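[annotation] The scripts/common.sh xtrace a little above ("lt 1.15 2" through "return 0") is the lcov version check: both version strings are split on '.', '-' and ':' and compared component by component. A condensed, standalone sketch of the same logic; the wrapper below is a reconstruction for illustration, not the verbatim script, and it skips the digit validation ("decimal") step the real script performs:

    lt() {  # lt A B -> exit status 0 when version A < version B
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # Compare up to the longer component count, padding with 0.
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov 1.15 is older than 2.x"

Non-numeric components (e.g. "rc1") would break this condensed version; the real script filters those out before comparing.
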
11:41:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.348 11:41:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:41.348 11:41:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.348 11:41:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.348 11:41:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.348 11:41:49 -- spdk/autotest.sh@48 -- # udevadm_pid=3026646 00:02:41.348 11:41:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.348 11:41:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.348 11:41:49 -- pm/common@17 -- # local monitor 00:02:41.348 11:41:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.348 11:41:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.348 11:41:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.348 11:41:49 -- pm/common@21 -- # date +%s 00:02:41.348 11:41:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.348 11:41:49 -- pm/common@21 -- # date +%s 00:02:41.348 11:41:49 -- pm/common@25 -- # sleep 1 00:02:41.348 11:41:49 -- pm/common@21 -- # date +%s 00:02:41.348 11:41:49 -- pm/common@21 -- # date +%s 00:02:41.348 11:41:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740909 00:02:41.348 11:41:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740909 00:02:41.348 11:41:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740909 00:02:41.348 11:41:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740909 00:02:41.348 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740909_collect-cpu-load.pm.log 00:02:41.348 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740909_collect-vmstat.pm.log 00:02:41.348 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740909_collect-cpu-temp.pm.log 00:02:41.348 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740909_collect-bmc-pm.bmc.pm.log 00:02:42.288 11:41:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.289 11:41:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.289 11:41:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.289 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:02:42.289 11:41:50 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.289 11:41:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:42.289 11:41:50 -- common/autotest_common.sh@10 -- # set +x 00:02:42.289 11:41:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:42.547 11:41:50 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.547 11:41:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.547 11:41:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:42.547 11:41:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:42.547 11:41:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.547 11:41:50 -- common/autotest_common.sh@1457 -- # uname 00:02:42.547 11:41:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:42.547 11:41:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.547 11:41:50 -- common/autotest_common.sh@1477 -- # uname 00:02:42.547 11:41:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:42.547 11:41:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:42.547 11:41:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:42.547 lcov: LCOV version 1.15 00:02:42.547 11:41:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:54.759 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.759 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:06.975 11:42:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:06.975 11:42:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.975 11:42:14 -- common/autotest_common.sh@10 -- # set +x 00:03:06.975 11:42:14 -- spdk/autotest.sh@78 -- # rm -f 00:03:06.975 11:42:14 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.522 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:09.522 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:09.522 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:09.523 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:09.523 11:42:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:09.523 11:42:17 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:09.523 11:42:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:09.523 11:42:17 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:09.523 11:42:17 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:09.523 11:42:17 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:09.523 11:42:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:09.523 11:42:17 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:09.523 11:42:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:09.523 11:42:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:09.523 11:42:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:09.523 11:42:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.523 11:42:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:09.523 11:42:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:09.523 11:42:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:09.523 11:42:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:09.523 11:42:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:09.523 11:42:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:09.523 11:42:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.523 No valid GPT data, bailing 00:03:09.523 11:42:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.523 11:42:17 -- scripts/common.sh@394 -- # pt= 00:03:09.523 11:42:17 -- scripts/common.sh@395 -- # return 1 00:03:09.523 11:42:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.523 1+0 records in 00:03:09.523 1+0 records out 00:03:09.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463645 s, 226 MB/s 00:03:09.523 11:42:17 -- spdk/autotest.sh@105 -- # sync 00:03:09.523 11:42:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.523 11:42:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.523 11:42:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.101 11:42:22 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.101 11:42:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.101 11:42:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:16.101 11:42:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:18.010 Hugepages 00:03:18.010 node hugesize free / total 00:03:18.010 node0 1048576kB 0 / 0 00:03:18.010 node0 2048kB 0 / 0 00:03:18.010 node1 1048576kB 0 / 0 00:03:18.010 node1 2048kB 0 / 0 00:03:18.010 00:03:18.010 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:18.010 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:18.010 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:18.010 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:18.010 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 
00:03:18.010 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:18.010 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:18.010 11:42:25 -- spdk/autotest.sh@117 -- # uname -s 00:03:18.010 11:42:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:18.010 11:42:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:18.010 11:42:25 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:21.306 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.306 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.245 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.505 11:42:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:23.446 11:42:31 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:23.446 11:42:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:23.446 11:42:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:23.446 11:42:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:23.446 11:42:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:23.446 11:42:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:23.446 11:42:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:23.446 11:42:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:23.446 11:42:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:23.446 11:42:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:23.446 11:42:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:23.446 11:42:31 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.742 Waiting for block devices as requested 00:03:26.742 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:26.742 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:26.742 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:27.001 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:27.001 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:27.001 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:27.262 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:27.262 0000:80:04.4 
(8086 2021): vfio-pci -> ioatdma 00:03:27.262 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:27.521 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:27.521 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:27.521 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:27.521 11:42:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:27.521 11:42:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:27.521 11:42:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:27.521 11:42:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:27.521 11:42:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:27.781 11:42:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:27.781 11:42:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:27.781 11:42:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:27.781 11:42:35 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:27.781 11:42:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:27.781 11:42:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:27.781 11:42:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:27.781 11:42:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:27.781 11:42:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:27.781 11:42:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:27.781 11:42:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:27.781 11:42:35 -- common/autotest_common.sh@1543 -- # continue 00:03:27.781 11:42:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:27.781 11:42:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:27.781 11:42:35 -- common/autotest_common.sh@10 -- # set +x 00:03:27.781 11:42:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:27.781 11:42:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.781 11:42:35 -- common/autotest_common.sh@10 -- # set +x 00:03:27.781 11:42:35 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:31.077 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:03:31.077 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:31.077 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.016 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.016 11:42:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:32.016 11:42:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:32.016 11:42:40 -- common/autotest_common.sh@10 -- # set +x 00:03:32.275 11:42:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:32.275 11:42:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:32.275 11:42:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:32.275 11:42:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:32.275 11:42:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:32.275 11:42:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:32.275 11:42:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:32.275 11:42:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:32.275 11:42:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:32.275 11:42:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:32.275 11:42:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.275 11:42:40 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.275 11:42:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:32.275 11:42:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:32.275 11:42:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:32.275 11:42:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:32.275 11:42:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:32.275 11:42:40 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:32.275 11:42:40 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:32.275 11:42:40 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:32.275 11:42:40 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:32.275 11:42:40 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:32.275 11:42:40 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:32.275 11:42:40 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3041365 00:03:32.275 11:42:40 -- common/autotest_common.sh@1585 -- # waitforlisten 3041365 00:03:32.275 11:42:40 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.275 11:42:40 -- common/autotest_common.sh@835 -- # '[' -z 3041365 ']' 00:03:32.275 11:42:40 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.275 11:42:40 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.275 11:42:40 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.275 11:42:40 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.275 11:42:40 -- common/autotest_common.sh@10 -- # set +x 00:03:32.275 [2024-12-09 11:42:40.210899] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:03:32.275 [2024-12-09 11:42:40.210951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041365 ] 00:03:32.275 [2024-12-09 11:42:40.287944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.535 [2024-12-09 11:42:40.328906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.535 11:42:40 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:32.535 11:42:40 -- common/autotest_common.sh@868 -- # return 0 00:03:32.535 11:42:40 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:32.535 11:42:40 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:32.535 11:42:40 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:35.833 nvme0n1 00:03:35.833 11:42:43 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:35.833 [2024-12-09 11:42:43.730350] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:35.833 request: 00:03:35.833 { 00:03:35.833 "nvme_ctrlr_name": "nvme0", 00:03:35.833 "password": "test", 00:03:35.833 "method": "bdev_nvme_opal_revert", 00:03:35.833 "req_id": 1 00:03:35.833 } 00:03:35.833 Got JSON-RPC error response 00:03:35.833 response: 00:03:35.833 { 00:03:35.833 "code": -32602, 00:03:35.833 "message": "Invalid parameters" 00:03:35.833 } 00:03:35.833 11:42:43 -- common/autotest_common.sh@1591 -- # true 00:03:35.833 11:42:43 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:35.833 11:42:43 -- common/autotest_common.sh@1595 -- # killprocess 3041365 00:03:35.833 11:42:43 -- common/autotest_common.sh@954 -- # '[' -z 3041365 ']' 00:03:35.833 11:42:43 -- common/autotest_common.sh@958 -- # kill -0 3041365 00:03:35.833 11:42:43 -- common/autotest_common.sh@959 -- # uname 00:03:35.833 11:42:43 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.833 11:42:43 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3041365 00:03:35.833 11:42:43 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.833 11:42:43 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.833 11:42:43 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3041365' 00:03:35.833 killing process with pid 3041365 00:03:35.833 11:42:43 -- common/autotest_common.sh@973 -- # kill 3041365 00:03:35.833 11:42:43 -- common/autotest_common.sh@978 -- # wait 3041365 00:03:38.376 11:42:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:38.376 11:42:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:38.376 11:42:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.376 11:42:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.376 11:42:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:38.376 11:42:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.376 11:42:45 -- common/autotest_common.sh@10 -- # set +x 00:03:38.376 11:42:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:38.376 11:42:45 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:38.376 11:42:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.376 11:42:45 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:38.376 11:42:45 -- common/autotest_common.sh@10 -- # set +x 00:03:38.376 ************************************ 00:03:38.376 START TEST env 00:03:38.376 ************************************ 00:03:38.376 11:42:45 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:38.376 * Looking for test storage... 00:03:38.376 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:38.376 11:42:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.376 11:42:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.376 11:42:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.376 11:42:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.376 11:42:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.376 11:42:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.376 11:42:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.376 11:42:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.376 11:42:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.376 11:42:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.376 11:42:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.376 11:42:46 env -- scripts/common.sh@344 -- # case "$op" in 00:03:38.376 11:42:46 env -- scripts/common.sh@345 -- # : 1 00:03:38.376 11:42:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.376 11:42:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.376 11:42:46 env -- scripts/common.sh@365 -- # decimal 1 00:03:38.376 11:42:46 env -- scripts/common.sh@353 -- # local d=1 00:03:38.376 11:42:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.376 11:42:46 env -- scripts/common.sh@355 -- # echo 1 00:03:38.376 11:42:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.376 11:42:46 env -- scripts/common.sh@366 -- # decimal 2 00:03:38.376 11:42:46 env -- scripts/common.sh@353 -- # local d=2 00:03:38.376 11:42:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.376 11:42:46 env -- scripts/common.sh@355 -- # echo 2 00:03:38.376 11:42:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.376 11:42:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.376 11:42:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.376 11:42:46 env -- scripts/common.sh@368 -- # return 0 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.376 --rc genhtml_branch_coverage=1 00:03:38.376 --rc genhtml_function_coverage=1 00:03:38.376 --rc genhtml_legend=1 00:03:38.376 --rc geninfo_all_blocks=1 00:03:38.376 --rc geninfo_unexecuted_blocks=1 00:03:38.376 00:03:38.376 ' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.376 --rc genhtml_branch_coverage=1 00:03:38.376 --rc genhtml_function_coverage=1 00:03:38.376 --rc genhtml_legend=1 00:03:38.376 --rc geninfo_all_blocks=1 00:03:38.376 --rc geninfo_unexecuted_blocks=1 00:03:38.376 00:03:38.376 ' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.376 --rc genhtml_branch_coverage=1 00:03:38.376 --rc genhtml_function_coverage=1 00:03:38.376 --rc genhtml_legend=1 00:03:38.376 --rc geninfo_all_blocks=1 00:03:38.376 --rc geninfo_unexecuted_blocks=1 00:03:38.376 00:03:38.376 ' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.376 --rc genhtml_branch_coverage=1 00:03:38.376 --rc genhtml_function_coverage=1 00:03:38.376 --rc genhtml_legend=1 00:03:38.376 --rc geninfo_all_blocks=1 00:03:38.376 --rc geninfo_unexecuted_blocks=1 00:03:38.376 00:03:38.376 ' 00:03:38.376 11:42:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.376 11:42:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.376 11:42:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.376 ************************************ 00:03:38.376 START TEST env_memory 00:03:38.376 ************************************ 00:03:38.376 11:42:46 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:38.376 00:03:38.376 00:03:38.376 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.376 http://cunit.sourceforge.net/ 00:03:38.376 00:03:38.376 00:03:38.376 Suite: memory 00:03:38.376 Test: alloc and free memory map ...[2024-12-09 11:42:46.203025] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:38.376 passed 00:03:38.377 Test: mem map translation ...[2024-12-09 11:42:46.220640] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:38.377 [2024-12-09 11:42:46.220653] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:38.377 [2024-12-09 11:42:46.220686] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:38.377 [2024-12-09 11:42:46.220692] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:38.377 passed 00:03:38.377 Test: mem map registration ...[2024-12-09 11:42:46.256232] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:38.377 [2024-12-09 11:42:46.256244] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:38.377 passed 00:03:38.377 Test: mem map adjacent registrations ...passed 00:03:38.377 00:03:38.377 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.377 suites 1 1 n/a 0 0 00:03:38.377 tests 4 4 4 0 0 00:03:38.377 asserts 152 152 152 0 n/a 00:03:38.377 00:03:38.377 Elapsed time = 0.131 seconds 00:03:38.377 00:03:38.377 real 0m0.143s 00:03:38.377 user 0m0.133s 00:03:38.377 sys 0m0.009s 00:03:38.377 11:42:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.377 11:42:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:38.377 ************************************ 00:03:38.377 END TEST env_memory 00:03:38.377 ************************************ 00:03:38.377 11:42:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:38.377 11:42:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:38.377 11:42:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.377 11:42:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.377 ************************************ 00:03:38.377 START TEST env_vtophys 00:03:38.377 ************************************ 00:03:38.377 11:42:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:38.377 EAL: lib.eal log level changed from notice to debug 00:03:38.377 EAL: Detected lcore 0 as core 0 on socket 0 00:03:38.377 EAL: Detected lcore 1 as core 1 on socket 0 00:03:38.377 EAL: Detected lcore 2 as core 2 on socket 0 00:03:38.377 EAL: Detected lcore 3 as core 3 on socket 0 00:03:38.377 EAL: Detected lcore 4 as core 4 on socket 0 00:03:38.377 EAL: Detected lcore 5 as core 5 on socket 0 00:03:38.377 EAL: Detected lcore 6 as core 6 on socket 0 00:03:38.377 EAL: Detected lcore 7 as core 8 on socket 0 00:03:38.377 EAL: Detected lcore 8 as core 9 on socket 0 00:03:38.377 EAL: Detected lcore 9 as core 10 on socket 0 00:03:38.377 EAL: Detected lcore 10 as core 11 on socket 0 00:03:38.377 
EAL: Detected lcore 11 as core 12 on socket 0 00:03:38.377 EAL: Detected lcore 12 as core 13 on socket 0 00:03:38.377 EAL: Detected lcore 13 as core 16 on socket 0 00:03:38.377 EAL: Detected lcore 14 as core 17 on socket 0 00:03:38.377 EAL: Detected lcore 15 as core 18 on socket 0 00:03:38.377 EAL: Detected lcore 16 as core 19 on socket 0 00:03:38.377 EAL: Detected lcore 17 as core 20 on socket 0 00:03:38.377 EAL: Detected lcore 18 as core 21 on socket 0 00:03:38.377 EAL: Detected lcore 19 as core 25 on socket 0 00:03:38.377 EAL: Detected lcore 20 as core 26 on socket 0 00:03:38.377 EAL: Detected lcore 21 as core 27 on socket 0 00:03:38.377 EAL: Detected lcore 22 as core 28 on socket 0 00:03:38.377 EAL: Detected lcore 23 as core 29 on socket 0 00:03:38.377 EAL: Detected lcore 24 as core 0 on socket 1 00:03:38.377 EAL: Detected lcore 25 as core 1 on socket 1 00:03:38.377 EAL: Detected lcore 26 as core 2 on socket 1 00:03:38.377 EAL: Detected lcore 27 as core 3 on socket 1 00:03:38.377 EAL: Detected lcore 28 as core 4 on socket 1 00:03:38.377 EAL: Detected lcore 29 as core 5 on socket 1 00:03:38.377 EAL: Detected lcore 30 as core 6 on socket 1 00:03:38.377 EAL: Detected lcore 31 as core 8 on socket 1 00:03:38.377 EAL: Detected lcore 32 as core 10 on socket 1 00:03:38.377 EAL: Detected lcore 33 as core 11 on socket 1 00:03:38.377 EAL: Detected lcore 34 as core 12 on socket 1 00:03:38.377 EAL: Detected lcore 35 as core 13 on socket 1 00:03:38.377 EAL: Detected lcore 36 as core 16 on socket 1 00:03:38.377 EAL: Detected lcore 37 as core 17 on socket 1 00:03:38.377 EAL: Detected lcore 38 as core 18 on socket 1 00:03:38.377 EAL: Detected lcore 39 as core 19 on socket 1 00:03:38.377 EAL: Detected lcore 40 as core 20 on socket 1 00:03:38.377 EAL: Detected lcore 41 as core 21 on socket 1 00:03:38.377 EAL: Detected lcore 42 as core 24 on socket 1 00:03:38.377 EAL: Detected lcore 43 as core 25 on socket 1 00:03:38.377 EAL: Detected lcore 44 as core 26 on socket 1 00:03:38.377 EAL: Detected lcore 45 as core 27 on socket 1 00:03:38.377 EAL: Detected lcore 46 as core 28 on socket 1 00:03:38.377 EAL: Detected lcore 47 as core 29 on socket 1 00:03:38.377 EAL: Detected lcore 48 as core 0 on socket 0 00:03:38.377 EAL: Detected lcore 49 as core 1 on socket 0 00:03:38.377 EAL: Detected lcore 50 as core 2 on socket 0 00:03:38.377 EAL: Detected lcore 51 as core 3 on socket 0 00:03:38.377 EAL: Detected lcore 52 as core 4 on socket 0 00:03:38.377 EAL: Detected lcore 53 as core 5 on socket 0 00:03:38.377 EAL: Detected lcore 54 as core 6 on socket 0 00:03:38.377 EAL: Detected lcore 55 as core 8 on socket 0 00:03:38.377 EAL: Detected lcore 56 as core 9 on socket 0 00:03:38.377 EAL: Detected lcore 57 as core 10 on socket 0 00:03:38.377 EAL: Detected lcore 58 as core 11 on socket 0 00:03:38.377 EAL: Detected lcore 59 as core 12 on socket 0 00:03:38.377 EAL: Detected lcore 60 as core 13 on socket 0 00:03:38.377 EAL: Detected lcore 61 as core 16 on socket 0 00:03:38.377 EAL: Detected lcore 62 as core 17 on socket 0 00:03:38.377 EAL: Detected lcore 63 as core 18 on socket 0 00:03:38.377 EAL: Detected lcore 64 as core 19 on socket 0 00:03:38.377 EAL: Detected lcore 65 as core 20 on socket 0 00:03:38.377 EAL: Detected lcore 66 as core 21 on socket 0 00:03:38.377 EAL: Detected lcore 67 as core 25 on socket 0 00:03:38.377 EAL: Detected lcore 68 as core 26 on socket 0 00:03:38.377 EAL: Detected lcore 69 as core 27 on socket 0 00:03:38.377 EAL: Detected lcore 70 as core 28 on socket 0 00:03:38.377 EAL: Detected lcore 71 as core 
29 on socket 0 00:03:38.377 EAL: Detected lcore 72 as core 0 on socket 1 00:03:38.377 EAL: Detected lcore 73 as core 1 on socket 1 00:03:38.377 EAL: Detected lcore 74 as core 2 on socket 1 00:03:38.377 EAL: Detected lcore 75 as core 3 on socket 1 00:03:38.377 EAL: Detected lcore 76 as core 4 on socket 1 00:03:38.377 EAL: Detected lcore 77 as core 5 on socket 1 00:03:38.377 EAL: Detected lcore 78 as core 6 on socket 1 00:03:38.377 EAL: Detected lcore 79 as core 8 on socket 1 00:03:38.377 EAL: Detected lcore 80 as core 10 on socket 1 00:03:38.377 EAL: Detected lcore 81 as core 11 on socket 1 00:03:38.377 EAL: Detected lcore 82 as core 12 on socket 1 00:03:38.377 EAL: Detected lcore 83 as core 13 on socket 1 00:03:38.377 EAL: Detected lcore 84 as core 16 on socket 1 00:03:38.377 EAL: Detected lcore 85 as core 17 on socket 1 00:03:38.377 EAL: Detected lcore 86 as core 18 on socket 1 00:03:38.377 EAL: Detected lcore 87 as core 19 on socket 1 00:03:38.377 EAL: Detected lcore 88 as core 20 on socket 1 00:03:38.377 EAL: Detected lcore 89 as core 21 on socket 1 00:03:38.377 EAL: Detected lcore 90 as core 24 on socket 1 00:03:38.377 EAL: Detected lcore 91 as core 25 on socket 1 00:03:38.377 EAL: Detected lcore 92 as core 26 on socket 1 00:03:38.377 EAL: Detected lcore 93 as core 27 on socket 1 00:03:38.377 EAL: Detected lcore 94 as core 28 on socket 1 00:03:38.377 EAL: Detected lcore 95 as core 29 on socket 1 00:03:38.377 EAL: Maximum logical cores by configuration: 128 00:03:38.377 EAL: Detected CPU lcores: 96 00:03:38.377 EAL: Detected NUMA nodes: 2 00:03:38.377 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:38.377 EAL: Detected shared linkage of DPDK 00:03:38.377 EAL: No shared files mode enabled, IPC will be disabled 00:03:38.377 EAL: Bus pci wants IOVA as 'DC' 00:03:38.377 EAL: Buses did not request a specific IOVA mode. 00:03:38.377 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:38.377 EAL: Selected IOVA mode 'VA' 00:03:38.377 EAL: Probing VFIO support... 00:03:38.377 EAL: IOMMU type 1 (Type 1) is supported 00:03:38.377 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:38.377 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:38.377 EAL: VFIO support initialized 00:03:38.377 EAL: Ask a virtual area of 0x2e000 bytes 00:03:38.377 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:38.377 EAL: Setting up physically contiguous memory... 
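The EAL probe above detects IOMMU type 1, initializes VFIO support, and therefore selects the 'VA' IOVA mode. A rough preflight check for the same preconditions, assuming the standard sysfs layout (this is not how EAL itself probes, only an approximation):

  # An active IOMMU plus a loadable vfio-pci module makes IOVA-as-VA viable.
  if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
    echo "IOMMU groups present"
  else
    echo "no IOMMU groups; EAL would have to fall back to IOVA as PA"
  fi
  modprobe -n -v vfio-pci > /dev/null 2>&1 && echo "vfio-pci module available"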
00:03:38.377 EAL: Setting maximum number of open files to 524288 00:03:38.377 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:38.377 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:38.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:38.377 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.377 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:38.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.377 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.377 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:38.377 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:38.377 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.377 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:38.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.377 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.377 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:38.377 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:38.377 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.377 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:38.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.377 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.377 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:38.377 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:38.377 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.377 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:38.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:38.377 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.377 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:38.378 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:38.378 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:38.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.378 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:38.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.378 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:38.378 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:38.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.378 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:38.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.378 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:38.378 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:38.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.378 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:38.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.378 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:38.378 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:38.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:38.378 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:38.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:38.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:38.378 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:38.378 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:38.378 EAL: Hugepages will be freed exactly as allocated. 00:03:38.378 EAL: No shared files mode enabled, IPC is disabled 00:03:38.378 EAL: No shared files mode enabled, IPC is disabled 00:03:38.378 EAL: TSC frequency is ~2100000 KHz 00:03:38.378 EAL: Main lcore 0 is ready (tid=7fc41e8e9a00;cpuset=[0]) 00:03:38.378 EAL: Trying to obtain current memory policy. 00:03:38.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.378 EAL: Restoring previous memory policy: 0 00:03:38.378 EAL: request: mp_malloc_sync 00:03:38.378 EAL: No shared files mode enabled, IPC is disabled 00:03:38.378 EAL: Heap on socket 0 was expanded by 2MB 00:03:38.378 EAL: No shared files mode enabled, IPC is disabled 00:03:38.637 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:38.637 EAL: Mem event callback 'spdk:(nil)' registered 00:03:38.637 00:03:38.638 00:03:38.638 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.638 http://cunit.sourceforge.net/ 00:03:38.638 00:03:38.638 00:03:38.638 Suite: components_suite 00:03:38.638 Test: vtophys_malloc_test ...passed 00:03:38.638 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 4MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 4MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 6MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 6MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 10MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 10MB 00:03:38.638 EAL: Trying to obtain current memory policy. 
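The vtophys_spdk_malloc_test rounds before and after this point expand the heap by 4MB, 6MB, 10MB, 18MB and so on up to 1026MB: each request is 2^k + 2 MB, presumably to exercise both power-of-two and just-over-power-of-two allocation sizes. The logged sequence can be reproduced with:

  # Prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo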
00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 18MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 18MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 34MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 34MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 66MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 66MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 130MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was shrunk by 130MB 00:03:38.638 EAL: Trying to obtain current memory policy. 00:03:38.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.638 EAL: Restoring previous memory policy: 4 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.638 EAL: request: mp_malloc_sync 00:03:38.638 EAL: No shared files mode enabled, IPC is disabled 00:03:38.638 EAL: Heap on socket 0 was expanded by 258MB 00:03:38.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.897 EAL: request: mp_malloc_sync 00:03:38.897 EAL: No shared files mode enabled, IPC is disabled 00:03:38.897 EAL: Heap on socket 0 was shrunk by 258MB 00:03:38.897 EAL: Trying to obtain current memory policy. 
00:03:38.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.897 EAL: Restoring previous memory policy: 4 00:03:38.897 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.897 EAL: request: mp_malloc_sync 00:03:38.897 EAL: No shared files mode enabled, IPC is disabled 00:03:38.898 EAL: Heap on socket 0 was expanded by 514MB 00:03:38.898 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.157 EAL: request: mp_malloc_sync 00:03:39.157 EAL: No shared files mode enabled, IPC is disabled 00:03:39.157 EAL: Heap on socket 0 was shrunk by 514MB 00:03:39.157 EAL: Trying to obtain current memory policy. 00:03:39.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.157 EAL: Restoring previous memory policy: 4 00:03:39.157 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.157 EAL: request: mp_malloc_sync 00:03:39.157 EAL: No shared files mode enabled, IPC is disabled 00:03:39.157 EAL: Heap on socket 0 was expanded by 1026MB 00:03:39.416 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.676 EAL: request: mp_malloc_sync 00:03:39.676 EAL: No shared files mode enabled, IPC is disabled 00:03:39.676 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:39.676 passed 00:03:39.676 00:03:39.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.676 suites 1 1 n/a 0 0 00:03:39.676 tests 2 2 2 0 0 00:03:39.676 asserts 497 497 497 0 n/a 00:03:39.676 00:03:39.676 Elapsed time = 0.973 seconds 00:03:39.676 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.676 EAL: request: mp_malloc_sync 00:03:39.676 EAL: No shared files mode enabled, IPC is disabled 00:03:39.676 EAL: Heap on socket 0 was shrunk by 2MB 00:03:39.676 EAL: No shared files mode enabled, IPC is disabled 00:03:39.676 EAL: No shared files mode enabled, IPC is disabled 00:03:39.676 EAL: No shared files mode enabled, IPC is disabled 00:03:39.676 00:03:39.676 real 0m1.110s 00:03:39.676 user 0m0.654s 00:03:39.676 sys 0m0.427s 00:03:39.676 11:42:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.676 11:42:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:39.676 ************************************ 00:03:39.676 END TEST env_vtophys 00:03:39.676 ************************************ 00:03:39.676 11:42:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:39.676 11:42:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.676 11:42:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.676 11:42:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.676 ************************************ 00:03:39.676 START TEST env_pci 00:03:39.676 ************************************ 00:03:39.676 11:42:47 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:39.676 00:03:39.676 00:03:39.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.676 http://cunit.sourceforge.net/ 00:03:39.676 00:03:39.676 00:03:39.676 Suite: pci 00:03:39.676 Test: pci_hook ...[2024-12-09 11:42:47.570197] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3042673 has claimed it 00:03:39.676 EAL: Cannot find device (10000:00:01.0) 00:03:39.676 EAL: Failed to attach device on primary process 00:03:39.676 passed 00:03:39.676 00:03:39.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.676 suites 1 
1 n/a 0 0 00:03:39.676 tests 1 1 1 0 0 00:03:39.676 asserts 25 25 25 0 n/a 00:03:39.676 00:03:39.676 Elapsed time = 0.030 seconds 00:03:39.676 00:03:39.676 real 0m0.051s 00:03:39.676 user 0m0.018s 00:03:39.676 sys 0m0.032s 00:03:39.676 11:42:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.676 11:42:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:39.676 ************************************ 00:03:39.676 END TEST env_pci 00:03:39.676 ************************************ 00:03:39.676 11:42:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:39.676 11:42:47 env -- env/env.sh@15 -- # uname 00:03:39.676 11:42:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:39.676 11:42:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:39.676 11:42:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:39.676 11:42:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:39.676 11:42:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.676 11:42:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.676 ************************************ 00:03:39.676 START TEST env_dpdk_post_init 00:03:39.676 ************************************ 00:03:39.676 11:42:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:39.676 EAL: Detected CPU lcores: 96 00:03:39.676 EAL: Detected NUMA nodes: 2 00:03:39.676 EAL: Detected shared linkage of DPDK 00:03:39.676 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.676 EAL: Selected IOVA mode 'VA' 00:03:39.676 EAL: VFIO support initialized 00:03:39.936 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.936 EAL: Using IOMMU type 1 (Type 1) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:39.936 EAL: Ignore mapping IO port bar(1) 00:03:39.936 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:40.875 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:80:04.2 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:40.875 EAL: Ignore mapping IO port bar(1) 00:03:40.875 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:45.067 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:45.067 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:45.067 Starting DPDK initialization... 00:03:45.067 Starting SPDK post initialization... 00:03:45.067 SPDK NVMe probe 00:03:45.067 Attaching to 0000:5e:00.0 00:03:45.067 Attached to 0000:5e:00.0 00:03:45.067 Cleaning up... 00:03:45.067 00:03:45.067 real 0m4.958s 00:03:45.067 user 0m3.522s 00:03:45.067 sys 0m0.506s 00:03:45.067 11:42:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.067 11:42:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.067 ************************************ 00:03:45.067 END TEST env_dpdk_post_init 00:03:45.067 ************************************ 00:03:45.067 11:42:52 env -- env/env.sh@26 -- # uname 00:03:45.067 11:42:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:45.067 11:42:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.067 11:42:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.067 11:42:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.067 11:42:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.067 ************************************ 00:03:45.067 START TEST env_mem_callbacks 00:03:45.067 ************************************ 00:03:45.067 11:42:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.067 EAL: Detected CPU lcores: 96 00:03:45.067 EAL: Detected NUMA nodes: 2 00:03:45.067 EAL: Detected shared linkage of DPDK 00:03:45.067 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.067 EAL: Selected IOVA mode 'VA' 00:03:45.067 EAL: VFIO support initialized 00:03:45.067 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.067 00:03:45.067 00:03:45.067 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.067 http://cunit.sourceforge.net/ 00:03:45.067 00:03:45.067 00:03:45.067 Suite: memory 00:03:45.067 Test: test ... 
00:03:45.067 register 0x200000200000 2097152 00:03:45.067 malloc 3145728 00:03:45.067 register 0x200000400000 4194304 00:03:45.067 buf 0x200000500000 len 3145728 PASSED 00:03:45.067 malloc 64 00:03:45.067 buf 0x2000004fff40 len 64 PASSED 00:03:45.067 malloc 4194304 00:03:45.067 register 0x200000800000 6291456 00:03:45.067 buf 0x200000a00000 len 4194304 PASSED 00:03:45.067 free 0x200000500000 3145728 00:03:45.067 free 0x2000004fff40 64 00:03:45.067 unregister 0x200000400000 4194304 PASSED 00:03:45.067 free 0x200000a00000 4194304 00:03:45.067 unregister 0x200000800000 6291456 PASSED 00:03:45.067 malloc 8388608 00:03:45.067 register 0x200000400000 10485760 00:03:45.067 buf 0x200000600000 len 8388608 PASSED 00:03:45.067 free 0x200000600000 8388608 00:03:45.067 unregister 0x200000400000 10485760 PASSED 00:03:45.067 passed 00:03:45.067 00:03:45.067 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.067 suites 1 1 n/a 0 0 00:03:45.067 tests 1 1 1 0 0 00:03:45.067 asserts 15 15 15 0 n/a 00:03:45.067 00:03:45.067 Elapsed time = 0.009 seconds 00:03:45.067 00:03:45.067 real 0m0.061s 00:03:45.067 user 0m0.023s 00:03:45.067 sys 0m0.038s 00:03:45.068 11:42:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.068 11:42:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:45.068 ************************************ 00:03:45.068 END TEST env_mem_callbacks 00:03:45.068 ************************************ 00:03:45.068 00:03:45.068 real 0m6.859s 00:03:45.068 user 0m4.590s 00:03:45.068 sys 0m1.344s 00:03:45.068 11:42:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.068 11:42:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.068 ************************************ 00:03:45.068 END TEST env 00:03:45.068 ************************************ 00:03:45.068 11:42:52 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.068 11:42:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.068 11:42:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.068 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.068 ************************************ 00:03:45.068 START TEST rpc 00:03:45.068 ************************************ 00:03:45.068 11:42:52 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.068 * Looking for test storage... 
00:03:45.068 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:45.068 11:42:52 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:45.068 11:42:52 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:45.068 11:42:52 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.068 11:42:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.068 11:42:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.068 11:42:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.068 11:42:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.068 11:42:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.068 11:42:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:45.068 11:42:53 rpc -- scripts/common.sh@345 -- # : 1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.068 11:42:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.068 11:42:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@353 -- # local d=1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.068 11:42:53 rpc -- scripts/common.sh@355 -- # echo 1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.068 11:42:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@353 -- # local d=2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.068 11:42:53 rpc -- scripts/common.sh@355 -- # echo 2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.068 11:42:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.068 11:42:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.068 11:42:53 rpc -- scripts/common.sh@368 -- # return 0 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 --rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 --rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 
--rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.068 --rc genhtml_branch_coverage=1 00:03:45.068 --rc genhtml_function_coverage=1 00:03:45.068 --rc genhtml_legend=1 00:03:45.068 --rc geninfo_all_blocks=1 00:03:45.068 --rc geninfo_unexecuted_blocks=1 00:03:45.068 00:03:45.068 ' 00:03:45.068 11:42:53 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:45.068 11:42:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3043724 00:03:45.068 11:42:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.068 11:42:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3043724 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 3043724 ']' 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.068 11:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.068 [2024-12-09 11:42:53.101822] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:03:45.068 [2024-12-09 11:42:53.101867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043724 ] 00:03:45.328 [2024-12-09 11:42:53.180212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.328 [2024-12-09 11:42:53.221741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:45.328 [2024-12-09 11:42:53.221778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3043724' to capture a snapshot of events at runtime. 00:03:45.328 [2024-12-09 11:42:53.221786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:45.328 [2024-12-09 11:42:53.221792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:45.328 [2024-12-09 11:42:53.221796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3043724 for offline analysis/debug. 
00:03:45.328 [2024-12-09 11:42:53.222317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.587 11:42:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.587 11:42:53 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:45.587 11:42:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:45.587 11:42:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:45.587 11:42:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:45.587 11:42:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:45.587 11:42:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.587 11:42:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.587 11:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 ************************************ 00:03:45.587 START TEST rpc_integrity 00:03:45.587 ************************************ 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:45.587 { 00:03:45.587 "name": "Malloc0", 00:03:45.587 "aliases": [ 00:03:45.587 "f3543c96-08d1-4033-913c-98e1dd84987c" 00:03:45.587 ], 00:03:45.587 "product_name": "Malloc disk", 00:03:45.587 "block_size": 512, 00:03:45.587 "num_blocks": 16384, 00:03:45.587 "uuid": "f3543c96-08d1-4033-913c-98e1dd84987c", 00:03:45.587 "assigned_rate_limits": { 00:03:45.587 "rw_ios_per_sec": 0, 00:03:45.587 "rw_mbytes_per_sec": 0, 00:03:45.587 "r_mbytes_per_sec": 0, 00:03:45.587 "w_mbytes_per_sec": 0 00:03:45.587 }, 00:03:45.587 "claimed": false, 
00:03:45.587 "zoned": false, 00:03:45.587 "supported_io_types": { 00:03:45.587 "read": true, 00:03:45.587 "write": true, 00:03:45.587 "unmap": true, 00:03:45.587 "flush": true, 00:03:45.587 "reset": true, 00:03:45.587 "nvme_admin": false, 00:03:45.587 "nvme_io": false, 00:03:45.587 "nvme_io_md": false, 00:03:45.587 "write_zeroes": true, 00:03:45.587 "zcopy": true, 00:03:45.587 "get_zone_info": false, 00:03:45.587 "zone_management": false, 00:03:45.587 "zone_append": false, 00:03:45.587 "compare": false, 00:03:45.587 "compare_and_write": false, 00:03:45.587 "abort": true, 00:03:45.587 "seek_hole": false, 00:03:45.587 "seek_data": false, 00:03:45.587 "copy": true, 00:03:45.587 "nvme_iov_md": false 00:03:45.587 }, 00:03:45.587 "memory_domains": [ 00:03:45.587 { 00:03:45.587 "dma_device_id": "system", 00:03:45.587 "dma_device_type": 1 00:03:45.587 }, 00:03:45.587 { 00:03:45.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.587 "dma_device_type": 2 00:03:45.587 } 00:03:45.587 ], 00:03:45.587 "driver_specific": {} 00:03:45.587 } 00:03:45.587 ]' 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 [2024-12-09 11:42:53.591678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:45.587 [2024-12-09 11:42:53.591704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:45.587 [2024-12-09 11:42:53.591716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13176e0 00:03:45.587 [2024-12-09 11:42:53.591723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:45.587 [2024-12-09 11:42:53.592794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:45.587 [2024-12-09 11:42:53.592821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:45.587 Passthru0 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.587 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:45.587 { 00:03:45.587 "name": "Malloc0", 00:03:45.587 "aliases": [ 00:03:45.587 "f3543c96-08d1-4033-913c-98e1dd84987c" 00:03:45.587 ], 00:03:45.588 "product_name": "Malloc disk", 00:03:45.588 "block_size": 512, 00:03:45.588 "num_blocks": 16384, 00:03:45.588 "uuid": "f3543c96-08d1-4033-913c-98e1dd84987c", 00:03:45.588 "assigned_rate_limits": { 00:03:45.588 "rw_ios_per_sec": 0, 00:03:45.588 "rw_mbytes_per_sec": 0, 00:03:45.588 "r_mbytes_per_sec": 0, 00:03:45.588 "w_mbytes_per_sec": 0 00:03:45.588 }, 00:03:45.588 "claimed": true, 00:03:45.588 "claim_type": "exclusive_write", 00:03:45.588 "zoned": false, 00:03:45.588 "supported_io_types": { 00:03:45.588 "read": true, 00:03:45.588 "write": true, 00:03:45.588 "unmap": true, 00:03:45.588 "flush": true, 00:03:45.588 "reset": true, 
00:03:45.588 "nvme_admin": false, 00:03:45.588 "nvme_io": false, 00:03:45.588 "nvme_io_md": false, 00:03:45.588 "write_zeroes": true, 00:03:45.588 "zcopy": true, 00:03:45.588 "get_zone_info": false, 00:03:45.588 "zone_management": false, 00:03:45.588 "zone_append": false, 00:03:45.588 "compare": false, 00:03:45.588 "compare_and_write": false, 00:03:45.588 "abort": true, 00:03:45.588 "seek_hole": false, 00:03:45.588 "seek_data": false, 00:03:45.588 "copy": true, 00:03:45.588 "nvme_iov_md": false 00:03:45.588 }, 00:03:45.588 "memory_domains": [ 00:03:45.588 { 00:03:45.588 "dma_device_id": "system", 00:03:45.588 "dma_device_type": 1 00:03:45.588 }, 00:03:45.588 { 00:03:45.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.588 "dma_device_type": 2 00:03:45.588 } 00:03:45.588 ], 00:03:45.588 "driver_specific": {} 00:03:45.588 }, 00:03:45.588 { 00:03:45.588 "name": "Passthru0", 00:03:45.588 "aliases": [ 00:03:45.588 "0c08c717-430e-5b48-aec7-a823353d74f9" 00:03:45.588 ], 00:03:45.588 "product_name": "passthru", 00:03:45.588 "block_size": 512, 00:03:45.588 "num_blocks": 16384, 00:03:45.588 "uuid": "0c08c717-430e-5b48-aec7-a823353d74f9", 00:03:45.588 "assigned_rate_limits": { 00:03:45.588 "rw_ios_per_sec": 0, 00:03:45.588 "rw_mbytes_per_sec": 0, 00:03:45.588 "r_mbytes_per_sec": 0, 00:03:45.588 "w_mbytes_per_sec": 0 00:03:45.588 }, 00:03:45.588 "claimed": false, 00:03:45.588 "zoned": false, 00:03:45.588 "supported_io_types": { 00:03:45.588 "read": true, 00:03:45.588 "write": true, 00:03:45.588 "unmap": true, 00:03:45.588 "flush": true, 00:03:45.588 "reset": true, 00:03:45.588 "nvme_admin": false, 00:03:45.588 "nvme_io": false, 00:03:45.588 "nvme_io_md": false, 00:03:45.588 "write_zeroes": true, 00:03:45.588 "zcopy": true, 00:03:45.588 "get_zone_info": false, 00:03:45.588 "zone_management": false, 00:03:45.588 "zone_append": false, 00:03:45.588 "compare": false, 00:03:45.588 "compare_and_write": false, 00:03:45.588 "abort": true, 00:03:45.588 "seek_hole": false, 00:03:45.588 "seek_data": false, 00:03:45.588 "copy": true, 00:03:45.588 "nvme_iov_md": false 00:03:45.588 }, 00:03:45.588 "memory_domains": [ 00:03:45.588 { 00:03:45.588 "dma_device_id": "system", 00:03:45.588 "dma_device_type": 1 00:03:45.588 }, 00:03:45.588 { 00:03:45.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.588 "dma_device_type": 2 00:03:45.588 } 00:03:45.588 ], 00:03:45.588 "driver_specific": { 00:03:45.588 "passthru": { 00:03:45.588 "name": "Passthru0", 00:03:45.588 "base_bdev_name": "Malloc0" 00:03:45.588 } 00:03:45.588 } 00:03:45.588 } 00:03:45.588 ]' 00:03:45.588 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:45.847 
11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:45.847 11:42:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:45.847 00:03:45.847 real 0m0.276s 00:03:45.847 user 0m0.182s 00:03:45.847 sys 0m0.029s 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 ************************************ 00:03:45.847 END TEST rpc_integrity 00:03:45.847 ************************************ 00:03:45.847 11:42:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:45.847 11:42:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.847 11:42:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.847 11:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 ************************************ 00:03:45.847 START TEST rpc_plugins 00:03:45.847 ************************************ 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:45.847 { 00:03:45.847 "name": "Malloc1", 00:03:45.847 "aliases": [ 00:03:45.847 "a3c1cc3a-e8cd-4129-9baf-244a15aa3c32" 00:03:45.847 ], 00:03:45.847 "product_name": "Malloc disk", 00:03:45.847 "block_size": 4096, 00:03:45.847 "num_blocks": 256, 00:03:45.847 "uuid": "a3c1cc3a-e8cd-4129-9baf-244a15aa3c32", 00:03:45.847 "assigned_rate_limits": { 00:03:45.847 "rw_ios_per_sec": 0, 00:03:45.847 "rw_mbytes_per_sec": 0, 00:03:45.847 "r_mbytes_per_sec": 0, 00:03:45.847 "w_mbytes_per_sec": 0 00:03:45.847 }, 00:03:45.847 "claimed": false, 00:03:45.847 "zoned": false, 00:03:45.847 "supported_io_types": { 00:03:45.847 "read": true, 00:03:45.847 "write": true, 00:03:45.847 "unmap": true, 00:03:45.847 "flush": true, 00:03:45.847 "reset": true, 00:03:45.847 "nvme_admin": false, 00:03:45.847 "nvme_io": false, 00:03:45.847 "nvme_io_md": false, 00:03:45.847 "write_zeroes": true, 00:03:45.847 "zcopy": true, 00:03:45.847 "get_zone_info": false, 00:03:45.847 "zone_management": false, 00:03:45.847 "zone_append": false, 00:03:45.847 "compare": false, 00:03:45.847 "compare_and_write": false, 00:03:45.847 "abort": true, 00:03:45.847 "seek_hole": false, 00:03:45.847 "seek_data": false, 00:03:45.847 "copy": true, 00:03:45.847 "nvme_iov_md": false 00:03:45.847 }, 00:03:45.847 
"memory_domains": [ 00:03:45.847 { 00:03:45.847 "dma_device_id": "system", 00:03:45.847 "dma_device_type": 1 00:03:45.847 }, 00:03:45.847 { 00:03:45.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.847 "dma_device_type": 2 00:03:45.847 } 00:03:45.847 ], 00:03:45.847 "driver_specific": {} 00:03:45.847 } 00:03:45.847 ]' 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:45.847 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:45.848 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:45.848 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:45.848 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.106 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.106 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.106 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:46.106 11:42:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.106 00:03:46.106 real 0m0.135s 00:03:46.106 user 0m0.083s 00:03:46.106 sys 0m0.018s 00:03:46.106 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.106 11:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.106 ************************************ 00:03:46.106 END TEST rpc_plugins 00:03:46.106 ************************************ 00:03:46.106 11:42:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.106 11:42:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.106 11:42:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.106 11:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.106 ************************************ 00:03:46.106 START TEST rpc_trace_cmd_test 00:03:46.106 ************************************ 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.106 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:46.106 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3043724", 00:03:46.106 "tpoint_group_mask": "0x8", 00:03:46.106 "iscsi_conn": { 00:03:46.106 "mask": "0x2", 00:03:46.106 "tpoint_mask": "0x0" 00:03:46.106 }, 00:03:46.106 "scsi": { 00:03:46.106 "mask": "0x4", 00:03:46.106 "tpoint_mask": "0x0" 00:03:46.106 }, 00:03:46.106 "bdev": { 00:03:46.106 "mask": "0x8", 00:03:46.106 "tpoint_mask": "0xffffffffffffffff" 00:03:46.106 }, 00:03:46.106 "nvmf_rdma": { 00:03:46.106 "mask": "0x10", 00:03:46.106 "tpoint_mask": "0x0" 00:03:46.106 }, 00:03:46.106 "nvmf_tcp": { 00:03:46.106 "mask": "0x20", 00:03:46.106 "tpoint_mask": "0x0" 00:03:46.106 }, 
00:03:46.106 "ftl": { 00:03:46.107 "mask": "0x40", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "blobfs": { 00:03:46.107 "mask": "0x80", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "dsa": { 00:03:46.107 "mask": "0x200", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "thread": { 00:03:46.107 "mask": "0x400", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "nvme_pcie": { 00:03:46.107 "mask": "0x800", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "iaa": { 00:03:46.107 "mask": "0x1000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "nvme_tcp": { 00:03:46.107 "mask": "0x2000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "bdev_nvme": { 00:03:46.107 "mask": "0x4000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "sock": { 00:03:46.107 "mask": "0x8000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "blob": { 00:03:46.107 "mask": "0x10000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "bdev_raid": { 00:03:46.107 "mask": "0x20000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 }, 00:03:46.107 "scheduler": { 00:03:46.107 "mask": "0x40000", 00:03:46.107 "tpoint_mask": "0x0" 00:03:46.107 } 00:03:46.107 }' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:46.107 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:46.366 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:46.366 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:46.366 11:42:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:46.366 00:03:46.366 real 0m0.225s 00:03:46.366 user 0m0.189s 00:03:46.366 sys 0m0.027s 00:03:46.366 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.366 11:42:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.366 ************************************ 00:03:46.366 END TEST rpc_trace_cmd_test 00:03:46.366 ************************************ 00:03:46.366 11:42:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:46.366 11:42:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:46.366 11:42:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:46.366 11:42:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.366 11:42:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.366 11:42:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.366 ************************************ 00:03:46.366 START TEST rpc_daemon_integrity 00:03:46.366 ************************************ 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.366 { 00:03:46.366 "name": "Malloc2", 00:03:46.366 "aliases": [ 00:03:46.366 "1dcd6087-7249-48c9-88da-03c5da0ca677" 00:03:46.366 ], 00:03:46.366 "product_name": "Malloc disk", 00:03:46.366 "block_size": 512, 00:03:46.366 "num_blocks": 16384, 00:03:46.366 "uuid": "1dcd6087-7249-48c9-88da-03c5da0ca677", 00:03:46.366 "assigned_rate_limits": { 00:03:46.366 "rw_ios_per_sec": 0, 00:03:46.366 "rw_mbytes_per_sec": 0, 00:03:46.366 "r_mbytes_per_sec": 0, 00:03:46.366 "w_mbytes_per_sec": 0 00:03:46.366 }, 00:03:46.366 "claimed": false, 00:03:46.366 "zoned": false, 00:03:46.366 "supported_io_types": { 00:03:46.366 "read": true, 00:03:46.366 "write": true, 00:03:46.366 "unmap": true, 00:03:46.366 "flush": true, 00:03:46.366 "reset": true, 00:03:46.366 "nvme_admin": false, 00:03:46.366 "nvme_io": false, 00:03:46.366 "nvme_io_md": false, 00:03:46.366 "write_zeroes": true, 00:03:46.366 "zcopy": true, 00:03:46.366 "get_zone_info": false, 00:03:46.366 "zone_management": false, 00:03:46.366 "zone_append": false, 00:03:46.366 "compare": false, 00:03:46.366 "compare_and_write": false, 00:03:46.366 "abort": true, 00:03:46.366 "seek_hole": false, 00:03:46.366 "seek_data": false, 00:03:46.366 "copy": true, 00:03:46.366 "nvme_iov_md": false 00:03:46.366 }, 00:03:46.366 "memory_domains": [ 00:03:46.366 { 00:03:46.366 "dma_device_id": "system", 00:03:46.366 "dma_device_type": 1 00:03:46.366 }, 00:03:46.366 { 00:03:46.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.366 "dma_device_type": 2 00:03:46.366 } 00:03:46.366 ], 00:03:46.366 "driver_specific": {} 00:03:46.366 } 00:03:46.366 ]' 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.366 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 [2024-12-09 11:42:54.425929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:46.626 [2024-12-09 11:42:54.425955] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.626 [2024-12-09 11:42:54.425969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1317b30 00:03:46.626 [2024-12-09 11:42:54.425976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.626 [2024-12-09 11:42:54.426953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.626 [2024-12-09 11:42:54.426973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.626 Passthru0 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.626 { 00:03:46.626 "name": "Malloc2", 00:03:46.626 "aliases": [ 00:03:46.626 "1dcd6087-7249-48c9-88da-03c5da0ca677" 00:03:46.626 ], 00:03:46.626 "product_name": "Malloc disk", 00:03:46.626 "block_size": 512, 00:03:46.626 "num_blocks": 16384, 00:03:46.626 "uuid": "1dcd6087-7249-48c9-88da-03c5da0ca677", 00:03:46.626 "assigned_rate_limits": { 00:03:46.626 "rw_ios_per_sec": 0, 00:03:46.626 "rw_mbytes_per_sec": 0, 00:03:46.626 "r_mbytes_per_sec": 0, 00:03:46.626 "w_mbytes_per_sec": 0 00:03:46.626 }, 00:03:46.626 "claimed": true, 00:03:46.626 "claim_type": "exclusive_write", 00:03:46.626 "zoned": false, 00:03:46.626 "supported_io_types": { 00:03:46.626 "read": true, 00:03:46.626 "write": true, 00:03:46.626 "unmap": true, 00:03:46.626 "flush": true, 00:03:46.626 "reset": true, 00:03:46.626 "nvme_admin": false, 00:03:46.626 "nvme_io": false, 00:03:46.626 "nvme_io_md": false, 00:03:46.626 "write_zeroes": true, 00:03:46.626 "zcopy": true, 00:03:46.626 "get_zone_info": false, 00:03:46.626 "zone_management": false, 00:03:46.626 "zone_append": false, 00:03:46.626 "compare": false, 00:03:46.626 "compare_and_write": false, 00:03:46.626 "abort": true, 00:03:46.626 "seek_hole": false, 00:03:46.626 "seek_data": false, 00:03:46.626 "copy": true, 00:03:46.626 "nvme_iov_md": false 00:03:46.626 }, 00:03:46.626 "memory_domains": [ 00:03:46.626 { 00:03:46.626 "dma_device_id": "system", 00:03:46.626 "dma_device_type": 1 00:03:46.626 }, 00:03:46.626 { 00:03:46.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.626 "dma_device_type": 2 00:03:46.626 } 00:03:46.626 ], 00:03:46.626 "driver_specific": {} 00:03:46.626 }, 00:03:46.626 { 00:03:46.626 "name": "Passthru0", 00:03:46.626 "aliases": [ 00:03:46.626 "e566a191-284c-52f5-9347-c7113f2aa4e9" 00:03:46.626 ], 00:03:46.626 "product_name": "passthru", 00:03:46.626 "block_size": 512, 00:03:46.626 "num_blocks": 16384, 00:03:46.626 "uuid": "e566a191-284c-52f5-9347-c7113f2aa4e9", 00:03:46.626 "assigned_rate_limits": { 00:03:46.626 "rw_ios_per_sec": 0, 00:03:46.626 "rw_mbytes_per_sec": 0, 00:03:46.626 "r_mbytes_per_sec": 0, 00:03:46.626 "w_mbytes_per_sec": 0 00:03:46.626 }, 00:03:46.626 "claimed": false, 00:03:46.626 "zoned": false, 00:03:46.626 "supported_io_types": { 00:03:46.626 "read": true, 00:03:46.626 "write": true, 00:03:46.626 "unmap": true, 00:03:46.626 "flush": true, 00:03:46.626 "reset": true, 00:03:46.626 "nvme_admin": false, 
00:03:46.626 "nvme_io": false, 00:03:46.626 "nvme_io_md": false, 00:03:46.626 "write_zeroes": true, 00:03:46.626 "zcopy": true, 00:03:46.626 "get_zone_info": false, 00:03:46.626 "zone_management": false, 00:03:46.626 "zone_append": false, 00:03:46.626 "compare": false, 00:03:46.626 "compare_and_write": false, 00:03:46.626 "abort": true, 00:03:46.626 "seek_hole": false, 00:03:46.626 "seek_data": false, 00:03:46.626 "copy": true, 00:03:46.626 "nvme_iov_md": false 00:03:46.626 }, 00:03:46.626 "memory_domains": [ 00:03:46.626 { 00:03:46.626 "dma_device_id": "system", 00:03:46.626 "dma_device_type": 1 00:03:46.626 }, 00:03:46.626 { 00:03:46.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.626 "dma_device_type": 2 00:03:46.626 } 00:03:46.626 ], 00:03:46.626 "driver_specific": { 00:03:46.626 "passthru": { 00:03:46.626 "name": "Passthru0", 00:03:46.626 "base_bdev_name": "Malloc2" 00:03:46.626 } 00:03:46.626 } 00:03:46.626 } 00:03:46.626 ]' 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.626 00:03:46.626 real 0m0.260s 00:03:46.626 user 0m0.159s 00:03:46.626 sys 0m0.039s 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.626 11:42:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.626 ************************************ 00:03:46.626 END TEST rpc_daemon_integrity 00:03:46.626 ************************************ 00:03:46.626 11:42:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:46.626 11:42:54 rpc -- rpc/rpc.sh@84 -- # killprocess 3043724 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 3043724 ']' 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@958 -- # kill -0 3043724 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@959 -- # uname 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043724 00:03:46.626 11:42:54 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043724' 00:03:46.626 killing process with pid 3043724 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@973 -- # kill 3043724 00:03:46.626 11:42:54 rpc -- common/autotest_common.sh@978 -- # wait 3043724 00:03:47.196 00:03:47.196 real 0m2.057s 00:03:47.196 user 0m2.637s 00:03:47.196 sys 0m0.678s 00:03:47.196 11:42:54 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.196 11:42:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.196 ************************************ 00:03:47.196 END TEST rpc 00:03:47.196 ************************************ 00:03:47.196 11:42:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.196 11:42:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.196 11:42:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.196 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:03:47.196 ************************************ 00:03:47.196 START TEST skip_rpc 00:03:47.196 ************************************ 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.196 * Looking for test storage... 00:03:47.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.196 11:42:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:47.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.196 --rc genhtml_branch_coverage=1 00:03:47.196 --rc genhtml_function_coverage=1 00:03:47.196 --rc genhtml_legend=1 00:03:47.196 --rc geninfo_all_blocks=1 00:03:47.196 --rc geninfo_unexecuted_blocks=1 00:03:47.196 00:03:47.196 ' 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:47.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.196 --rc genhtml_branch_coverage=1 00:03:47.196 --rc genhtml_function_coverage=1 00:03:47.196 --rc genhtml_legend=1 00:03:47.196 --rc geninfo_all_blocks=1 00:03:47.196 --rc geninfo_unexecuted_blocks=1 00:03:47.196 00:03:47.196 ' 00:03:47.196 11:42:55 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:47.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.196 --rc genhtml_branch_coverage=1 00:03:47.196 --rc genhtml_function_coverage=1 00:03:47.196 --rc genhtml_legend=1 00:03:47.196 --rc geninfo_all_blocks=1 00:03:47.197 --rc geninfo_unexecuted_blocks=1 00:03:47.197 00:03:47.197 ' 00:03:47.197 11:42:55 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.197 --rc genhtml_branch_coverage=1 00:03:47.197 --rc genhtml_function_coverage=1 00:03:47.197 --rc genhtml_legend=1 00:03:47.197 --rc geninfo_all_blocks=1 00:03:47.197 --rc geninfo_unexecuted_blocks=1 00:03:47.197 00:03:47.197 ' 00:03:47.197 11:42:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:47.197 11:42:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:47.197 11:42:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:47.197 11:42:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.197 11:42:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.197 11:42:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.197 ************************************ 00:03:47.197 START TEST skip_rpc 00:03:47.197 ************************************ 00:03:47.197 11:42:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:47.197 11:42:55 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3044261 00:03:47.197 11:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.197 11:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:47.197 11:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:47.455 [2024-12-09 11:42:55.280128] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:03:47.455 [2024-12-09 11:42:55.280169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044261 ] 00:03:47.455 [2024-12-09 11:42:55.360541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.455 [2024-12-09 11:42:55.403955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:52.745 11:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3044261 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3044261 ']' 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3044261 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044261 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044261' 00:03:52.746 killing process with pid 3044261 00:03:52.746 11:43:00 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3044261 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3044261 00:03:52.746 00:03:52.746 real 0m5.367s 00:03:52.746 user 0m5.134s 00:03:52.746 sys 0m0.271s 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.746 11:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.746 ************************************ 00:03:52.746 END TEST skip_rpc 00:03:52.746 ************************************ 00:03:52.746 11:43:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:52.746 11:43:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.746 11:43:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.746 11:43:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.746 ************************************ 00:03:52.746 START TEST skip_rpc_with_json 00:03:52.746 ************************************ 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3045188 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3045188 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3045188 ']' 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.746 11:43:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.746 [2024-12-09 11:43:00.716079] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:03:52.746 [2024-12-09 11:43:00.716123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045188 ] 00:03:52.746 [2024-12-09 11:43:00.795089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.006 [2024-12-09 11:43:00.837004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.006 [2024-12-09 11:43:01.051380] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:53.006 request: 00:03:53.006 { 00:03:53.006 "trtype": "tcp", 00:03:53.006 "method": "nvmf_get_transports", 00:03:53.006 "req_id": 1 00:03:53.006 } 00:03:53.006 Got JSON-RPC error response 00:03:53.006 response: 00:03:53.006 { 00:03:53.006 "code": -19, 00:03:53.006 "message": "No such device" 00:03:53.006 } 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.006 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.266 [2024-12-09 11:43:01.063486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.266 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:53.266 { 00:03:53.266 "subsystems": [ 00:03:53.266 { 00:03:53.266 "subsystem": "fsdev", 00:03:53.266 "config": [ 00:03:53.266 { 00:03:53.266 "method": "fsdev_set_opts", 00:03:53.266 "params": { 00:03:53.266 "fsdev_io_pool_size": 65535, 00:03:53.266 "fsdev_io_cache_size": 256 00:03:53.266 } 00:03:53.266 } 00:03:53.266 ] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "keyring", 00:03:53.266 "config": [] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "iobuf", 00:03:53.266 "config": [ 00:03:53.266 { 00:03:53.266 "method": "iobuf_set_options", 00:03:53.266 "params": { 00:03:53.266 "small_pool_count": 8192, 00:03:53.266 "large_pool_count": 1024, 00:03:53.266 "small_bufsize": 8192, 00:03:53.266 "large_bufsize": 135168, 00:03:53.266 "enable_numa": false 00:03:53.266 } 00:03:53.266 } 00:03:53.266 ] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "sock", 00:03:53.266 "config": [ 00:03:53.266 { 
00:03:53.266 "method": "sock_set_default_impl", 00:03:53.266 "params": { 00:03:53.266 "impl_name": "posix" 00:03:53.266 } 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "method": "sock_impl_set_options", 00:03:53.266 "params": { 00:03:53.266 "impl_name": "ssl", 00:03:53.266 "recv_buf_size": 4096, 00:03:53.266 "send_buf_size": 4096, 00:03:53.266 "enable_recv_pipe": true, 00:03:53.266 "enable_quickack": false, 00:03:53.266 "enable_placement_id": 0, 00:03:53.266 "enable_zerocopy_send_server": true, 00:03:53.266 "enable_zerocopy_send_client": false, 00:03:53.266 "zerocopy_threshold": 0, 00:03:53.266 "tls_version": 0, 00:03:53.266 "enable_ktls": false 00:03:53.266 } 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "method": "sock_impl_set_options", 00:03:53.266 "params": { 00:03:53.266 "impl_name": "posix", 00:03:53.266 "recv_buf_size": 2097152, 00:03:53.266 "send_buf_size": 2097152, 00:03:53.266 "enable_recv_pipe": true, 00:03:53.266 "enable_quickack": false, 00:03:53.266 "enable_placement_id": 0, 00:03:53.266 "enable_zerocopy_send_server": true, 00:03:53.266 "enable_zerocopy_send_client": false, 00:03:53.266 "zerocopy_threshold": 0, 00:03:53.266 "tls_version": 0, 00:03:53.266 "enable_ktls": false 00:03:53.266 } 00:03:53.266 } 00:03:53.266 ] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "vmd", 00:03:53.266 "config": [] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "accel", 00:03:53.266 "config": [ 00:03:53.266 { 00:03:53.266 "method": "accel_set_options", 00:03:53.266 "params": { 00:03:53.266 "small_cache_size": 128, 00:03:53.266 "large_cache_size": 16, 00:03:53.266 "task_count": 2048, 00:03:53.266 "sequence_count": 2048, 00:03:53.266 "buf_count": 2048 00:03:53.266 } 00:03:53.266 } 00:03:53.266 ] 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "subsystem": "bdev", 00:03:53.266 "config": [ 00:03:53.266 { 00:03:53.266 "method": "bdev_set_options", 00:03:53.266 "params": { 00:03:53.266 "bdev_io_pool_size": 65535, 00:03:53.266 "bdev_io_cache_size": 256, 00:03:53.266 "bdev_auto_examine": true, 00:03:53.266 "iobuf_small_cache_size": 128, 00:03:53.266 "iobuf_large_cache_size": 16 00:03:53.266 } 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "method": "bdev_raid_set_options", 00:03:53.266 "params": { 00:03:53.266 "process_window_size_kb": 1024, 00:03:53.266 "process_max_bandwidth_mb_sec": 0 00:03:53.266 } 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "method": "bdev_iscsi_set_options", 00:03:53.266 "params": { 00:03:53.266 "timeout_sec": 30 00:03:53.266 } 00:03:53.266 }, 00:03:53.266 { 00:03:53.266 "method": "bdev_nvme_set_options", 00:03:53.266 "params": { 00:03:53.266 "action_on_timeout": "none", 00:03:53.266 "timeout_us": 0, 00:03:53.266 "timeout_admin_us": 0, 00:03:53.266 "keep_alive_timeout_ms": 10000, 00:03:53.266 "arbitration_burst": 0, 00:03:53.266 "low_priority_weight": 0, 00:03:53.266 "medium_priority_weight": 0, 00:03:53.266 "high_priority_weight": 0, 00:03:53.266 "nvme_adminq_poll_period_us": 10000, 00:03:53.266 "nvme_ioq_poll_period_us": 0, 00:03:53.266 "io_queue_requests": 0, 00:03:53.266 "delay_cmd_submit": true, 00:03:53.266 "transport_retry_count": 4, 00:03:53.267 "bdev_retry_count": 3, 00:03:53.267 "transport_ack_timeout": 0, 00:03:53.267 "ctrlr_loss_timeout_sec": 0, 00:03:53.267 "reconnect_delay_sec": 0, 00:03:53.267 "fast_io_fail_timeout_sec": 0, 00:03:53.267 "disable_auto_failback": false, 00:03:53.267 "generate_uuids": false, 00:03:53.267 "transport_tos": 0, 00:03:53.267 "nvme_error_stat": false, 00:03:53.267 "rdma_srq_size": 0, 00:03:53.267 "io_path_stat": false, 
00:03:53.267 "allow_accel_sequence": false, 00:03:53.267 "rdma_max_cq_size": 0, 00:03:53.267 "rdma_cm_event_timeout_ms": 0, 00:03:53.267 "dhchap_digests": [ 00:03:53.267 "sha256", 00:03:53.267 "sha384", 00:03:53.267 "sha512" 00:03:53.267 ], 00:03:53.267 "dhchap_dhgroups": [ 00:03:53.267 "null", 00:03:53.267 "ffdhe2048", 00:03:53.267 "ffdhe3072", 00:03:53.267 "ffdhe4096", 00:03:53.267 "ffdhe6144", 00:03:53.267 "ffdhe8192" 00:03:53.267 ] 00:03:53.267 } 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "method": "bdev_nvme_set_hotplug", 00:03:53.267 "params": { 00:03:53.267 "period_us": 100000, 00:03:53.267 "enable": false 00:03:53.267 } 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "method": "bdev_wait_for_examine" 00:03:53.267 } 00:03:53.267 ] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "scsi", 00:03:53.267 "config": null 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "scheduler", 00:03:53.267 "config": [ 00:03:53.267 { 00:03:53.267 "method": "framework_set_scheduler", 00:03:53.267 "params": { 00:03:53.267 "name": "static" 00:03:53.267 } 00:03:53.267 } 00:03:53.267 ] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "vhost_scsi", 00:03:53.267 "config": [] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "vhost_blk", 00:03:53.267 "config": [] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "ublk", 00:03:53.267 "config": [] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "nbd", 00:03:53.267 "config": [] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "nvmf", 00:03:53.267 "config": [ 00:03:53.267 { 00:03:53.267 "method": "nvmf_set_config", 00:03:53.267 "params": { 00:03:53.267 "discovery_filter": "match_any", 00:03:53.267 "admin_cmd_passthru": { 00:03:53.267 "identify_ctrlr": false 00:03:53.267 }, 00:03:53.267 "dhchap_digests": [ 00:03:53.267 "sha256", 00:03:53.267 "sha384", 00:03:53.267 "sha512" 00:03:53.267 ], 00:03:53.267 "dhchap_dhgroups": [ 00:03:53.267 "null", 00:03:53.267 "ffdhe2048", 00:03:53.267 "ffdhe3072", 00:03:53.267 "ffdhe4096", 00:03:53.267 "ffdhe6144", 00:03:53.267 "ffdhe8192" 00:03:53.267 ] 00:03:53.267 } 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "method": "nvmf_set_max_subsystems", 00:03:53.267 "params": { 00:03:53.267 "max_subsystems": 1024 00:03:53.267 } 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "method": "nvmf_set_crdt", 00:03:53.267 "params": { 00:03:53.267 "crdt1": 0, 00:03:53.267 "crdt2": 0, 00:03:53.267 "crdt3": 0 00:03:53.267 } 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "method": "nvmf_create_transport", 00:03:53.267 "params": { 00:03:53.267 "trtype": "TCP", 00:03:53.267 "max_queue_depth": 128, 00:03:53.267 "max_io_qpairs_per_ctrlr": 127, 00:03:53.267 "in_capsule_data_size": 4096, 00:03:53.267 "max_io_size": 131072, 00:03:53.267 "io_unit_size": 131072, 00:03:53.267 "max_aq_depth": 128, 00:03:53.267 "num_shared_buffers": 511, 00:03:53.267 "buf_cache_size": 4294967295, 00:03:53.267 "dif_insert_or_strip": false, 00:03:53.267 "zcopy": false, 00:03:53.267 "c2h_success": true, 00:03:53.267 "sock_priority": 0, 00:03:53.267 "abort_timeout_sec": 1, 00:03:53.267 "ack_timeout": 0, 00:03:53.267 "data_wr_pool_size": 0 00:03:53.267 } 00:03:53.267 } 00:03:53.267 ] 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "subsystem": "iscsi", 00:03:53.267 "config": [ 00:03:53.267 { 00:03:53.267 "method": "iscsi_set_options", 00:03:53.267 "params": { 00:03:53.267 "node_base": "iqn.2016-06.io.spdk", 00:03:53.267 "max_sessions": 128, 00:03:53.267 "max_connections_per_session": 2, 00:03:53.267 "max_queue_depth": 64, 00:03:53.267 
"default_time2wait": 2, 00:03:53.267 "default_time2retain": 20, 00:03:53.267 "first_burst_length": 8192, 00:03:53.267 "immediate_data": true, 00:03:53.267 "allow_duplicated_isid": false, 00:03:53.267 "error_recovery_level": 0, 00:03:53.267 "nop_timeout": 60, 00:03:53.267 "nop_in_interval": 30, 00:03:53.267 "disable_chap": false, 00:03:53.267 "require_chap": false, 00:03:53.267 "mutual_chap": false, 00:03:53.267 "chap_group": 0, 00:03:53.267 "max_large_datain_per_connection": 64, 00:03:53.267 "max_r2t_per_connection": 4, 00:03:53.267 "pdu_pool_size": 36864, 00:03:53.267 "immediate_data_pool_size": 16384, 00:03:53.267 "data_out_pool_size": 2048 00:03:53.267 } 00:03:53.267 } 00:03:53.267 ] 00:03:53.267 } 00:03:53.267 ] 00:03:53.267 } 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3045188 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3045188 ']' 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3045188 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045188 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045188' 00:03:53.267 killing process with pid 3045188 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3045188 00:03:53.267 11:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3045188 00:03:53.837 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3045332 00:03:53.837 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.837 11:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3045332 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3045332 ']' 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3045332 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045332 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045332' 00:03:59.110 killing process with pid 3045332 00:03:59.110 11:43:06 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3045332 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3045332 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:59.110 00:03:59.110 real 0m6.274s 00:03:59.110 user 0m5.956s 00:03:59.110 sys 0m0.607s 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.110 11:43:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.110 ************************************ 00:03:59.110 END TEST skip_rpc_with_json 00:03:59.110 ************************************ 00:03:59.110 11:43:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:59.110 11:43:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.110 11:43:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.110 11:43:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.110 ************************************ 00:03:59.110 START TEST skip_rpc_with_delay 00:03:59.110 ************************************ 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.111 [2024-12-09 11:43:07.066821] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
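The error above is the whole point of skip_rpc_with_delay: spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, since there would be no RPC server to wait on. A minimal way to reproduce the check by hand, assuming the same workspace layout as this job:

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
# Expected to exit non-zero with the "Cannot use '--wait-for-rpc'" error seen above
if ! "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "spdk_tgt rejected the flag combination, as the test expects"
fi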
00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:59.111 00:03:59.111 real 0m0.067s 00:03:59.111 user 0m0.044s 00:03:59.111 sys 0m0.023s 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.111 11:43:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:59.111 ************************************ 00:03:59.111 END TEST skip_rpc_with_delay 00:03:59.111 ************************************ 00:03:59.111 11:43:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:59.111 11:43:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:59.111 11:43:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:59.111 11:43:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.111 11:43:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.111 11:43:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.111 ************************************ 00:03:59.111 START TEST exit_on_failed_rpc_init 00:03:59.111 ************************************ 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3046305 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3046305 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3046305 ']' 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.111 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.370 [2024-12-09 11:43:07.200741] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
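The es bookkeeping traced just above comes from the NOT helper in autotest_common.sh, which inverts a command's exit status so an expected failure counts as a pass. A simplified sketch of that logic (the real helper also validates the executable via valid_exec_arg and handles more exit-code cases; this keeps only the core inversion):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1   # exit codes above 128 usually mean death by signal
    (( es != 0 ))            # NOT succeeds exactly when the wrapped command failed
}
# Usage: NOT some_command_that_should_fail && echo "failed as expected"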
00:03:59.370 [2024-12-09 11:43:07.200783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046305 ] 00:03:59.370 [2024-12-09 11:43:07.278117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.370 [2024-12-09 11:43:07.320107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:59.630 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.630 [2024-12-09 11:43:07.592405] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:03:59.630 [2024-12-09 11:43:07.592451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046416 ] 00:03:59.630 [2024-12-09 11:43:07.666462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.890 [2024-12-09 11:43:07.709281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.890 [2024-12-09 11:43:07.709331] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
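The error above, and the non-zero shutdown that follows, is exactly what exit_on_failed_rpc_init provokes: a second spdk_tgt instance is started while the first still owns the default RPC socket /var/tmp/spdk.sock. A hand-run sketch of the same conflict (binary path and core masks as in this job; the sleep is an arbitrary startup grace period):

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &              # first instance binds /var/tmp/spdk.sock
FIRST=$!
sleep 2
if ! "$SPDK_BIN" -m 0x2; then     # same default socket: RPC listen fails, app exits non-zero
    echo "second instance refused the in-use socket, as the test expects"
fi
kill "$FIRST"
# Giving the second instance its own socket, e.g. -r /var/tmp/spdk2.sock,
# would let both run side by side.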
00:03:59.890 [2024-12-09 11:43:07.709340] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:59.890 [2024-12-09 11:43:07.709346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3046305 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3046305 ']' 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3046305 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046305 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046305' 00:03:59.890 killing process with pid 3046305 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3046305 00:03:59.890 11:43:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3046305 00:04:00.149 00:04:00.149 real 0m0.950s 00:04:00.149 user 0m1.014s 00:04:00.149 sys 0m0.385s 00:04:00.149 11:43:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.149 11:43:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.149 ************************************ 00:04:00.149 END TEST exit_on_failed_rpc_init 00:04:00.149 ************************************ 00:04:00.149 11:43:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:00.149 00:04:00.149 real 0m13.122s 00:04:00.149 user 0m12.355s 00:04:00.149 sys 0m1.574s 00:04:00.149 11:43:08 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.149 11:43:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.149 ************************************ 00:04:00.149 END TEST skip_rpc 00:04:00.149 ************************************ 00:04:00.149 11:43:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:00.149 11:43:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.149 11:43:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.149 11:43:08 -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.409 ************************************ 00:04:00.409 START TEST rpc_client 00:04:00.409 ************************************ 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:00.409 * Looking for test storage... 00:04:00.409 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.409 11:43:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.409 --rc genhtml_branch_coverage=1 00:04:00.409 --rc genhtml_function_coverage=1 00:04:00.409 --rc genhtml_legend=1 00:04:00.409 --rc geninfo_all_blocks=1 00:04:00.409 --rc geninfo_unexecuted_blocks=1 00:04:00.409 00:04:00.409 ' 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.409 --rc genhtml_branch_coverage=1 00:04:00.409 --rc genhtml_function_coverage=1 00:04:00.409 --rc genhtml_legend=1 00:04:00.409 --rc geninfo_all_blocks=1 00:04:00.409 --rc geninfo_unexecuted_blocks=1 00:04:00.409 00:04:00.409 ' 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.409 --rc genhtml_branch_coverage=1 00:04:00.409 --rc genhtml_function_coverage=1 00:04:00.409 --rc genhtml_legend=1 00:04:00.409 --rc geninfo_all_blocks=1 00:04:00.409 --rc geninfo_unexecuted_blocks=1 00:04:00.409 00:04:00.409 ' 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.409 --rc genhtml_branch_coverage=1 00:04:00.409 --rc genhtml_function_coverage=1 00:04:00.409 --rc genhtml_legend=1 00:04:00.409 --rc geninfo_all_blocks=1 00:04:00.409 --rc geninfo_unexecuted_blocks=1 00:04:00.409 00:04:00.409 ' 00:04:00.409 11:43:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:00.409 OK 00:04:00.409 11:43:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:00.409 00:04:00.409 real 0m0.197s 00:04:00.409 user 0m0.111s 00:04:00.409 sys 0m0.099s 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.409 11:43:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 ************************************ 00:04:00.409 END TEST rpc_client 00:04:00.409 ************************************ 00:04:00.409 11:43:08 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:00.409 
11:43:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.409 11:43:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.409 11:43:08 -- common/autotest_common.sh@10 -- # set +x 00:04:00.670 ************************************ 00:04:00.670 START TEST json_config 00:04:00.670 ************************************ 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.670 11:43:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.670 11:43:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.670 11:43:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.670 11:43:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.670 11:43:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.670 11:43:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:00.670 11:43:08 json_config -- scripts/common.sh@345 -- # : 1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.670 11:43:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.670 11:43:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@353 -- # local d=1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.670 11:43:08 json_config -- scripts/common.sh@355 -- # echo 1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.670 11:43:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@353 -- # local d=2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.670 11:43:08 json_config -- scripts/common.sh@355 -- # echo 2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.670 11:43:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.670 11:43:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.670 11:43:08 json_config -- scripts/common.sh@368 -- # return 0 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:00.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.670 --rc genhtml_branch_coverage=1 00:04:00.670 --rc genhtml_function_coverage=1 00:04:00.670 --rc genhtml_legend=1 00:04:00.670 --rc geninfo_all_blocks=1 00:04:00.670 --rc geninfo_unexecuted_blocks=1 00:04:00.670 00:04:00.670 ' 00:04:00.670 11:43:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:00.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.670 --rc genhtml_branch_coverage=1 00:04:00.670 --rc genhtml_function_coverage=1 00:04:00.670 --rc genhtml_legend=1 00:04:00.670 --rc geninfo_all_blocks=1 00:04:00.670 --rc geninfo_unexecuted_blocks=1 00:04:00.670 00:04:00.670 ' 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.671 --rc genhtml_branch_coverage=1 00:04:00.671 --rc genhtml_function_coverage=1 00:04:00.671 --rc genhtml_legend=1 00:04:00.671 --rc geninfo_all_blocks=1 00:04:00.671 --rc geninfo_unexecuted_blocks=1 00:04:00.671 00:04:00.671 ' 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.671 --rc genhtml_branch_coverage=1 00:04:00.671 --rc genhtml_function_coverage=1 00:04:00.671 --rc genhtml_legend=1 00:04:00.671 --rc geninfo_all_blocks=1 00:04:00.671 --rc geninfo_unexecuted_blocks=1 00:04:00.671 00:04:00.671 ' 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
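The defaults exported above (NVMF_PORT, NVMF_IP_PREFIX, NVMF_IP_LEAST_ADDR and friends) feed the nvme-cli host identity and connect arguments assembled just below. A condensed sketch of how the pieces combine (the connect line is illustrative only and is not run at this point in the test):

NVMF_PORT=4420
NVMF_IP_PREFIX=192.168.100
NVME_HOSTNQN=$(nvme gen-hostnqn)          # requires nvme-cli, as used below
NVME_HOST=("--hostnqn=$NVME_HOSTNQN")
# Illustrative connect using these pieces (not executed here):
#   nvme connect -t rdma -a ${NVMF_IP_PREFIX}.8 -s $NVMF_PORT \
#       -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
echo "host NQN: $NVME_HOSTNQN"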
00:04:00.671 11:43:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:00.671 11:43:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:00.671 11:43:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.671 11:43:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.671 11:43:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.671 11:43:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.671 11:43:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.671 11:43:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.671 11:43:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:00.671 11:43:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@51 -- # : 0 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:00.671 
11:43:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:00.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:00.671 11:43:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:00.671 INFO: JSON configuration test init 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.671 11:43:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:00.671 11:43:08 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:00.671 11:43:08 json_config -- json_config/common.sh@10 -- # shift 00:04:00.671 11:43:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.671 11:43:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.671 11:43:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.671 11:43:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.671 11:43:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.671 11:43:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3046671 00:04:00.671 11:43:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.671 Waiting for target to run... 00:04:00.671 11:43:08 json_config -- json_config/common.sh@25 -- # waitforlisten 3046671 /var/tmp/spdk_tgt.sock 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 3046671 ']' 00:04:00.671 11:43:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.671 11:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.671 [2024-12-09 11:43:08.721344] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
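waitforlisten, invoked above, amounts to polling the freshly started target's RPC socket until it answers. A stripped-down equivalent using rpc.py (socket path and script location as in this job; the retry count and interval are arbitrary choices, not the harness's actual values):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    if "$RPC" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; then
        echo "target is listening on /var/tmp/spdk_tgt.sock"
        break
    fi
    sleep 0.1
done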
00:04:00.671 [2024-12-09 11:43:08.721393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046671 ] 00:04:01.240 [2024-12-09 11:43:09.183250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.240 [2024-12-09 11:43:09.241007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.499 11:43:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.499 11:43:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:01.499 11:43:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:01.499 00:04:01.499 11:43:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:01.499 11:43:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:01.499 11:43:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.499 11:43:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 11:43:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:01.758 11:43:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:01.758 11:43:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.758 11:43:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.758 11:43:09 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:01.758 11:43:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:01.758 11:43:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:05.048 11:43:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@54 -- # sort 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:05.048 11:43:12 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:05.048 11:43:12 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:05.048 11:43:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:11.622 
11:43:18 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:04:11.622 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:04:11.622 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:04:11.622 11:43:18 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:04:11.622 Found net devices under 0000:da:00.0: mlx_0_0 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:04:11.622 Found net devices under 0000:da:00.1: mlx_0_1 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:11.622 11:43:18 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@62 -- # uname 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@78 -- # ip= 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_0 up 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:11.623 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:11.623 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:04:11.623 altname enp218s0f0np0 00:04:11.623 altname ens818f0np0 00:04:11.623 inet 192.168.100.8/24 scope global mlx_0_0 00:04:11.623 valid_lft forever preferred_lft forever 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@78 -- # ip= 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@80 -- # ip addr add 
192.168.100.9/24 dev mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_1 up 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:11.623 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:11.623 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:04:11.623 altname enp218s0f1np1 00:04:11.623 altname ens818f1np1 00:04:11.623 inet 192.168.100.9/24 scope global mlx_0_1 00:04:11.623 valid_lft forever preferred_lft forever 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@450 -- # return 0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:11.623 
11:43:18 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:11.623 192.168.100.9' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:11.623 192.168.100.9' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:11.623 192.168.100.9' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:11.623 11:43:18 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:11.623 11:43:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:11.623 11:43:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:11.623 11:43:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:11.623 MallocForNvmf0 00:04:11.623 11:43:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:11.623 11:43:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:11.623 MallocForNvmf1 00:04:11.623 11:43:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:11.623 11:43:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:11.623 [2024-12-09 11:43:19.251346] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:11.623 [2024-12-09 11:43:19.315591] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23410c0/0x2215a40) succeed. 00:04:11.623 [2024-12-09 11:43:19.327817] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23400b0/0x2295700) succeed. 
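The interface bring-up just traced reduces to a short loop. A condensed sketch of the allocate_nic_ips pass from nvmf/common.sh follows (not the verbatim function: the interface names, the 192.168.100 prefix, and the least address 8 are taken from the trace above):

count=8                                     # NVMF_IP_LEAST_ADDR
for nic_name in mlx_0_0 mlx_0_1; do         # get_rdma_if_list output in the trace
    # get_ip_address: first IPv4 address on the interface, empty if unset
    ip_addr=$(ip -o -4 addr show "$nic_name" | awk '{print $4}' | cut -d/ -f1)
    if [[ -z $ip_addr ]]; then
        ip addr add "192.168.100.$count/24" dev "$nic_name"
        ip link set "$nic_name" up
    fi
    (( count = count + 1 ))
done
# get_available_rdma_ips then re-reads the addresses, one per line, and the
# first/second target IPs fall out of head/tail exactly as traced:
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)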
00:04:11.623 11:43:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.623 11:43:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.623 11:43:19 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.623 11:43:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.883 11:43:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.883 11:43:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:12.142 11:43:19 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:12.142 11:43:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:12.142 [2024-12-09 11:43:20.147409] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:12.142 11:43:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:12.142 11:43:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.142 11:43:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.401 11:43:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:12.401 11:43:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.401 11:43:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.401 11:43:20 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:12.401 11:43:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:12.401 11:43:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:12.401 MallocBdevForConfigChangeCheck 00:04:12.401 11:43:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:12.401 11:43:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.401 11:43:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.660 11:43:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:12.660 11:43:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.919 11:43:20 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:12.919 INFO: shutting down applications... 
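Taken together, the tgt_rpc calls traced above amount to the following sequence; $rpc is shorthand introduced here for brevity, not a variable from the scripts:

rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'   # shorthand for the full path in the trace
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t rdma -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The two malloc bdevs become the namespaces of cnode1, and the listener reuses NVMF_FIRST_TARGET_IP discovered above.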
00:04:12.919 11:43:20 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:12.919 11:43:20 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:12.919 11:43:20 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:12.919 11:43:20 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:15.456 Calling clear_iscsi_subsystem 00:04:15.456 Calling clear_nvmf_subsystem 00:04:15.456 Calling clear_nbd_subsystem 00:04:15.456 Calling clear_ublk_subsystem 00:04:15.456 Calling clear_vhost_blk_subsystem 00:04:15.456 Calling clear_vhost_scsi_subsystem 00:04:15.456 Calling clear_bdev_subsystem 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:15.456 11:43:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:15.456 11:43:23 json_config -- json_config/json_config.sh@352 -- # break 00:04:15.456 11:43:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:15.456 11:43:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:15.456 11:43:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:15.456 11:43:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.456 11:43:23 json_config -- json_config/common.sh@35 -- # [[ -n 3046671 ]] 00:04:15.456 11:43:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3046671 00:04:15.456 11:43:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.456 11:43:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.456 11:43:23 json_config -- json_config/common.sh@41 -- # kill -0 3046671 00:04:15.456 11:43:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.025 11:43:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.025 11:43:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.025 11:43:23 json_config -- json_config/common.sh@41 -- # kill -0 3046671 00:04:16.025 11:43:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.025 11:43:23 json_config -- json_config/common.sh@43 -- # break 00:04:16.025 11:43:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.025 11:43:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.025 SPDK target shutdown done 00:04:16.025 11:43:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:16.025 INFO: relaunching applications... 
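The shutdown just traced is a single SIGINT followed by a bounded poll. A condensed sketch of json_config_test_shutdown_app from json_config/common.sh (pid hard-coded from the trace):

pid=3046671                      # app_pid[target] in the trace
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 only probes for existence
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5                    # up to 30 half-second ticks before giving up
done

Because kill -0 sends no signal, the loop simply watches the target drain its reactors after the one SIGINT delivered up front.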
00:04:16.025 11:43:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.025 11:43:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:16.025 11:43:23 json_config -- json_config/common.sh@10 -- # shift 00:04:16.025 11:43:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.025 11:43:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.025 11:43:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.025 11:43:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.025 11:43:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.025 11:43:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3051423 00:04:16.025 11:43:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.025 Waiting for target to run... 00:04:16.025 11:43:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.025 11:43:23 json_config -- json_config/common.sh@25 -- # waitforlisten 3051423 /var/tmp/spdk_tgt.sock 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 3051423 ']' 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.025 11:43:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.025 [2024-12-09 11:43:23.846167] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:16.025 [2024-12-09 11:43:23.846228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051423 ] 00:04:16.285 [2024-12-09 11:43:24.300275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.544 [2024-12-09 11:43:24.358848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.835 [2024-12-09 11:43:27.421048] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb07f40/0xaa1230) succeed. 00:04:19.835 [2024-12-09 11:43:27.433954] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb0a130/0xb36280) succeed. 
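Between the relaunch and the IB-device notices above sits waitforlisten. Roughly, and this is an approximation of the autotest_common.sh helper rather than its exact body, it polls the new target's RPC socket until it answers:

app_sock=/var/tmp/spdk_tgt.sock
max_retries=100                  # matches 'local max_retries=100' in the trace
for (( i = 0; i < max_retries; i++ )); do
    if scripts/rpc.py -t 1 -s "$app_sock" rpc_get_methods &>/dev/null; then
        break                    # target is up and serving RPCs
    fi
    sleep 0.5                    # assumed delay; the real helper differs in detail
done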
00:04:19.835 [2024-12-09 11:43:27.482899] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:20.093 11:43:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.093 11:43:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:20.093 11:43:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.093 00:04:20.093 11:43:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:20.093 11:43:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:20.093 INFO: Checking if target configuration is the same... 00:04:20.093 11:43:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:20.093 11:43:28 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.093 11:43:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.093 + '[' 2 -ne 2 ']' 00:04:20.093 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.093 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:20.093 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:20.093 +++ basename /dev/fd/62 00:04:20.093 ++ mktemp /tmp/62.XXX 00:04:20.093 + tmp_file_1=/tmp/62.yQU 00:04:20.093 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.093 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.093 + tmp_file_2=/tmp/spdk_tgt_config.json.BNc 00:04:20.093 + ret=0 00:04:20.094 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.662 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.662 + diff -u /tmp/62.yQU /tmp/spdk_tgt_config.json.BNc 00:04:20.662 + echo 'INFO: JSON config files are the same' 00:04:20.662 INFO: JSON config files are the same 00:04:20.662 + rm /tmp/62.yQU /tmp/spdk_tgt_config.json.BNc 00:04:20.662 + exit 0 00:04:20.662 11:43:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:20.662 11:43:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.662 INFO: changing configuration and checking if this can be detected... 
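The 'JSON config files are the same' verdict above comes from a normalize-then-diff step. A sketch of what json_diff.sh does with its two inputs, using temp files in place of the /dev/fd/62 plumbing visible in the trace:

rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
filter=test/json_config/config_filter.py
$rpc save_config | $filter -method sort > /tmp/live.json
$filter -method sort < spdk_tgt_config.json > /tmp/disk.json
diff -u /tmp/live.json /tmp/disk.json \
    && echo 'INFO: JSON config files are the same'

Sorting both documents first makes the comparison insensitive to RPC ordering, so only real configuration drift survives the diff.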
00:04:20.662 11:43:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.662 11:43:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.662 11:43:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:20.662 11:43:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.662 11:43:28 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.662 + '[' 2 -ne 2 ']' 00:04:20.662 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.662 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:20.662 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:20.662 +++ basename /dev/fd/62 00:04:20.662 ++ mktemp /tmp/62.XXX 00:04:20.662 + tmp_file_1=/tmp/62.qXf 00:04:20.662 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.662 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.662 + tmp_file_2=/tmp/spdk_tgt_config.json.stU 00:04:20.662 + ret=0 00:04:20.662 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.231 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.231 + diff -u /tmp/62.qXf /tmp/spdk_tgt_config.json.stU 00:04:21.231 + ret=1 00:04:21.231 + echo '=== Start of file: /tmp/62.qXf ===' 00:04:21.231 + cat /tmp/62.qXf 00:04:21.231 + echo '=== End of file: /tmp/62.qXf ===' 00:04:21.231 + echo '' 00:04:21.231 + echo '=== Start of file: /tmp/spdk_tgt_config.json.stU ===' 00:04:21.231 + cat /tmp/spdk_tgt_config.json.stU 00:04:21.231 + echo '=== End of file: /tmp/spdk_tgt_config.json.stU ===' 00:04:21.231 + echo '' 00:04:21.231 + rm /tmp/62.qXf /tmp/spdk_tgt_config.json.stU 00:04:21.231 + exit 1 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:21.231 INFO: configuration change detected. 
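Change detection is the same diff inverted: drop the sentinel bdev created earlier, re-dump the config, and require a non-empty diff. A sketch under the same shorthand as above:

rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
filter=test/json_config/config_filter.py
$rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! diff -u <($rpc save_config | $filter -method sort) \
             <($filter -method sort < spdk_tgt_config.json); then
    echo 'INFO: configuration change detected.'   # diff exits 1, as ret=1 above
fi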
00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 3051423 ]] 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.231 11:43:29 json_config -- json_config/json_config.sh@330 -- # killprocess 3051423 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@954 -- # '[' -z 3051423 ']' 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@958 -- # kill -0 3051423 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@959 -- # uname 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051423 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051423' 00:04:21.231 killing process with pid 3051423 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@973 -- # kill 3051423 00:04:21.231 11:43:29 json_config -- common/autotest_common.sh@978 -- # wait 3051423 00:04:23.769 11:43:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.769 11:43:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:23.769 11:43:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.769 11:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.769 11:43:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:23.769 11:43:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:23.769 INFO: Success 00:04:23.769 11:43:31 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@121 -- # sync 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:23.769 11:43:31 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:23.769 00:04:23.769 real 0m22.799s 00:04:23.769 user 0m24.846s 00:04:23.769 sys 0m7.039s 00:04:23.769 11:43:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.769 11:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.769 ************************************ 00:04:23.769 END TEST json_config 00:04:23.769 ************************************ 00:04:23.769 11:43:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:23.769 11:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.769 11:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.769 11:43:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.769 ************************************ 00:04:23.769 START TEST json_config_extra_key 00:04:23.769 ************************************ 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.769 11:43:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.769 --rc genhtml_branch_coverage=1 00:04:23.769 --rc genhtml_function_coverage=1 00:04:23.769 --rc genhtml_legend=1 00:04:23.769 --rc geninfo_all_blocks=1 00:04:23.769 --rc geninfo_unexecuted_blocks=1 00:04:23.769 00:04:23.769 ' 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.769 --rc genhtml_branch_coverage=1 00:04:23.769 --rc genhtml_function_coverage=1 00:04:23.769 --rc genhtml_legend=1 00:04:23.769 --rc geninfo_all_blocks=1 00:04:23.769 --rc geninfo_unexecuted_blocks=1 00:04:23.769 00:04:23.769 ' 00:04:23.769 11:43:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.770 --rc genhtml_branch_coverage=1 00:04:23.770 --rc genhtml_function_coverage=1 00:04:23.770 --rc genhtml_legend=1 00:04:23.770 --rc geninfo_all_blocks=1 00:04:23.770 --rc geninfo_unexecuted_blocks=1 00:04:23.770 00:04:23.770 ' 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.770 --rc genhtml_branch_coverage=1 00:04:23.770 --rc genhtml_function_coverage=1 00:04:23.770 --rc genhtml_legend=1 00:04:23.770 --rc geninfo_all_blocks=1 00:04:23.770 --rc geninfo_unexecuted_blocks=1 00:04:23.770 00:04:23.770 ' 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.770 
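The lt 1.15 2 trace above walks scripts/common.sh's field-wise version compare before picking lcov flags. A compact equivalent, not the verbatim implementation (which routes each field through a decimal sanity check):

lt() {
    local IFS='.-:'              # split on the same separators as the trace
    local -a ver1=($1) ver2=($2)
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                     # equal versions are not less-than
}
lt 1.15 2 && echo 'old lcov'     # matches the branch taken in the trace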
11:43:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:23.770 11:43:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.770 11:43:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.770 11:43:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.770 11:43:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.770 11:43:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.770 11:43:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.770 11:43:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.770 11:43:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.770 11:43:31 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.770 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.770 11:43:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:23.770 INFO: launching applications... 
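The declarations traced above are essentially the whole harness state for this test. Condensed, the extra_key launch looks like the sketch below; rootdir stands for the workspace spdk checkout, as in the trace, and the error trap is shown as a comment since on_error_exit lives elsewhere in json_config/common.sh:

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
# plus: trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR   (error hook set by the suite)
# json_config_test_start_app target --json "${configs_path[target]}" boils down to:
$rootdir/build/bin/spdk_tgt ${app_params[target]} \
    -r "${app_socket[target]}" --json "${configs_path[target]}" &
app_pid[target]=$!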
00:04:23.770 11:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3052922 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.770 Waiting for target to run... 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3052922 /var/tmp/spdk_tgt.sock 00:04:23.770 11:43:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3052922 ']' 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.770 11:43:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.770 [2024-12-09 11:43:31.584389] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:23.770 [2024-12-09 11:43:31.584438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052922 ] 00:04:24.030 [2024-12-09 11:43:32.042702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.289 [2024-12-09 11:43:32.098273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.549 11:43:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.549 11:43:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.549 00:04:24.549 11:43:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:24.549 INFO: shutting down applications... 
00:04:24.549 11:43:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3052922 ]] 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3052922 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3052922 00:04:24.549 11:43:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3052922 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.118 11:43:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.118 SPDK target shutdown done 00:04:25.118 11:43:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:25.118 Success 00:04:25.118 00:04:25.118 real 0m1.567s 00:04:25.118 user 0m1.175s 00:04:25.118 sys 0m0.566s 00:04:25.118 11:43:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.118 11:43:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:25.118 ************************************ 00:04:25.118 END TEST json_config_extra_key 00:04:25.118 ************************************ 00:04:25.118 11:43:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.118 11:43:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.118 11:43:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.118 11:43:32 -- common/autotest_common.sh@10 -- # set +x 00:04:25.118 ************************************ 00:04:25.118 START TEST alias_rpc 00:04:25.118 ************************************ 00:04:25.119 11:43:32 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.119 * Looking for test storage... 
00:04:25.119 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.119 11:43:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.119 --rc genhtml_branch_coverage=1 00:04:25.119 --rc genhtml_function_coverage=1 00:04:25.119 --rc genhtml_legend=1 00:04:25.119 --rc geninfo_all_blocks=1 00:04:25.119 --rc geninfo_unexecuted_blocks=1 00:04:25.119 00:04:25.119 ' 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.119 --rc genhtml_branch_coverage=1 00:04:25.119 --rc genhtml_function_coverage=1 00:04:25.119 --rc genhtml_legend=1 00:04:25.119 --rc geninfo_all_blocks=1 00:04:25.119 --rc geninfo_unexecuted_blocks=1 00:04:25.119 00:04:25.119 ' 00:04:25.119 11:43:33 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.119 --rc genhtml_branch_coverage=1 00:04:25.119 --rc genhtml_function_coverage=1 00:04:25.119 --rc genhtml_legend=1 00:04:25.119 --rc geninfo_all_blocks=1 00:04:25.119 --rc geninfo_unexecuted_blocks=1 00:04:25.119 00:04:25.119 ' 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.119 --rc genhtml_branch_coverage=1 00:04:25.119 --rc genhtml_function_coverage=1 00:04:25.119 --rc genhtml_legend=1 00:04:25.119 --rc geninfo_all_blocks=1 00:04:25.119 --rc geninfo_unexecuted_blocks=1 00:04:25.119 00:04:25.119 ' 00:04:25.119 11:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:25.119 11:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3053213 00:04:25.119 11:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3053213 00:04:25.119 11:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3053213 ']' 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.119 11:43:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.378 [2024-12-09 11:43:33.208763] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:25.378 [2024-12-09 11:43:33.208821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053213 ] 00:04:25.378 [2024-12-09 11:43:33.286549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.378 [2024-12-09 11:43:33.325627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:26.317 11:43:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:26.317 11:43:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3053213 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3053213 ']' 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3053213 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053213 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053213' 00:04:26.317 killing process with pid 3053213 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 3053213 00:04:26.317 11:43:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 3053213 00:04:26.577 00:04:26.577 real 0m1.604s 00:04:26.577 user 0m1.770s 00:04:26.577 sys 0m0.430s 00:04:26.577 11:43:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.577 11:43:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.577 ************************************ 00:04:26.577 END TEST alias_rpc 00:04:26.577 ************************************ 00:04:26.577 11:43:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:26.577 11:43:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.577 11:43:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.577 11:43:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.577 11:43:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.837 ************************************ 00:04:26.837 START TEST spdkcli_tcp 00:04:26.837 ************************************ 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.837 * Looking for test storage... 
00:04:26.837 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.837 11:43:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.837 --rc genhtml_branch_coverage=1 00:04:26.837 --rc genhtml_function_coverage=1 00:04:26.837 --rc genhtml_legend=1 00:04:26.837 --rc geninfo_all_blocks=1 00:04:26.837 --rc geninfo_unexecuted_blocks=1 00:04:26.837 00:04:26.837 ' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.837 --rc genhtml_branch_coverage=1 00:04:26.837 --rc genhtml_function_coverage=1 00:04:26.837 --rc genhtml_legend=1 00:04:26.837 --rc geninfo_all_blocks=1 00:04:26.837 --rc geninfo_unexecuted_blocks=1 
00:04:26.837 00:04:26.837 ' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.837 --rc genhtml_branch_coverage=1 00:04:26.837 --rc genhtml_function_coverage=1 00:04:26.837 --rc genhtml_legend=1 00:04:26.837 --rc geninfo_all_blocks=1 00:04:26.837 --rc geninfo_unexecuted_blocks=1 00:04:26.837 00:04:26.837 ' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.837 --rc genhtml_branch_coverage=1 00:04:26.837 --rc genhtml_function_coverage=1 00:04:26.837 --rc genhtml_legend=1 00:04:26.837 --rc geninfo_all_blocks=1 00:04:26.837 --rc geninfo_unexecuted_blocks=1 00:04:26.837 00:04:26.837 ' 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3053514 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3053514 00:04:26.837 11:43:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3053514 ']' 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.837 11:43:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.097 [2024-12-09 11:43:34.895400] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:27.097 [2024-12-09 11:43:34.895447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053514 ] 00:04:27.097 [2024-12-09 11:43:34.969428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.097 [2024-12-09 11:43:35.012373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.097 [2024-12-09 11:43:35.012374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.357 11:43:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.357 11:43:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:27.357 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3053732 00:04:27.357 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:27.357 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.357 [ 00:04:27.357 "bdev_malloc_delete", 00:04:27.357 "bdev_malloc_create", 00:04:27.357 "bdev_null_resize", 00:04:27.357 "bdev_null_delete", 00:04:27.357 "bdev_null_create", 00:04:27.357 "bdev_nvme_cuse_unregister", 00:04:27.357 "bdev_nvme_cuse_register", 00:04:27.357 "bdev_opal_new_user", 00:04:27.357 "bdev_opal_set_lock_state", 00:04:27.357 "bdev_opal_delete", 00:04:27.357 "bdev_opal_get_info", 00:04:27.357 "bdev_opal_create", 00:04:27.357 "bdev_nvme_opal_revert", 00:04:27.357 "bdev_nvme_opal_init", 00:04:27.357 "bdev_nvme_send_cmd", 00:04:27.357 "bdev_nvme_set_keys", 00:04:27.357 "bdev_nvme_get_path_iostat", 00:04:27.357 "bdev_nvme_get_mdns_discovery_info", 00:04:27.357 "bdev_nvme_stop_mdns_discovery", 00:04:27.357 "bdev_nvme_start_mdns_discovery", 00:04:27.357 "bdev_nvme_set_multipath_policy", 00:04:27.357 "bdev_nvme_set_preferred_path", 00:04:27.357 "bdev_nvme_get_io_paths", 00:04:27.357 "bdev_nvme_remove_error_injection", 00:04:27.357 "bdev_nvme_add_error_injection", 00:04:27.357 "bdev_nvme_get_discovery_info", 00:04:27.357 "bdev_nvme_stop_discovery", 00:04:27.357 "bdev_nvme_start_discovery", 00:04:27.357 "bdev_nvme_get_controller_health_info", 00:04:27.357 "bdev_nvme_disable_controller", 00:04:27.357 "bdev_nvme_enable_controller", 00:04:27.357 "bdev_nvme_reset_controller", 00:04:27.357 "bdev_nvme_get_transport_statistics", 00:04:27.357 "bdev_nvme_apply_firmware", 00:04:27.357 "bdev_nvme_detach_controller", 00:04:27.357 "bdev_nvme_get_controllers", 00:04:27.357 "bdev_nvme_attach_controller", 00:04:27.357 "bdev_nvme_set_hotplug", 00:04:27.357 "bdev_nvme_set_options", 00:04:27.357 "bdev_passthru_delete", 00:04:27.357 "bdev_passthru_create", 00:04:27.357 "bdev_lvol_set_parent_bdev", 00:04:27.357 "bdev_lvol_set_parent", 00:04:27.357 "bdev_lvol_check_shallow_copy", 00:04:27.357 "bdev_lvol_start_shallow_copy", 00:04:27.357 "bdev_lvol_grow_lvstore", 00:04:27.357 "bdev_lvol_get_lvols", 00:04:27.357 "bdev_lvol_get_lvstores", 00:04:27.357 "bdev_lvol_delete", 00:04:27.357 "bdev_lvol_set_read_only", 00:04:27.357 "bdev_lvol_resize", 00:04:27.357 "bdev_lvol_decouple_parent", 00:04:27.357 "bdev_lvol_inflate", 00:04:27.357 "bdev_lvol_rename", 00:04:27.357 "bdev_lvol_clone_bdev", 00:04:27.357 "bdev_lvol_clone", 00:04:27.357 "bdev_lvol_snapshot", 00:04:27.357 "bdev_lvol_create", 00:04:27.357 "bdev_lvol_delete_lvstore", 00:04:27.357 "bdev_lvol_rename_lvstore", 
00:04:27.357 "bdev_lvol_create_lvstore", 00:04:27.357 "bdev_raid_set_options", 00:04:27.357 "bdev_raid_remove_base_bdev", 00:04:27.357 "bdev_raid_add_base_bdev", 00:04:27.357 "bdev_raid_delete", 00:04:27.357 "bdev_raid_create", 00:04:27.357 "bdev_raid_get_bdevs", 00:04:27.357 "bdev_error_inject_error", 00:04:27.357 "bdev_error_delete", 00:04:27.357 "bdev_error_create", 00:04:27.357 "bdev_split_delete", 00:04:27.357 "bdev_split_create", 00:04:27.357 "bdev_delay_delete", 00:04:27.357 "bdev_delay_create", 00:04:27.357 "bdev_delay_update_latency", 00:04:27.357 "bdev_zone_block_delete", 00:04:27.357 "bdev_zone_block_create", 00:04:27.357 "blobfs_create", 00:04:27.357 "blobfs_detect", 00:04:27.357 "blobfs_set_cache_size", 00:04:27.357 "bdev_aio_delete", 00:04:27.357 "bdev_aio_rescan", 00:04:27.357 "bdev_aio_create", 00:04:27.357 "bdev_ftl_set_property", 00:04:27.357 "bdev_ftl_get_properties", 00:04:27.357 "bdev_ftl_get_stats", 00:04:27.357 "bdev_ftl_unmap", 00:04:27.357 "bdev_ftl_unload", 00:04:27.357 "bdev_ftl_delete", 00:04:27.357 "bdev_ftl_load", 00:04:27.357 "bdev_ftl_create", 00:04:27.357 "bdev_virtio_attach_controller", 00:04:27.357 "bdev_virtio_scsi_get_devices", 00:04:27.357 "bdev_virtio_detach_controller", 00:04:27.357 "bdev_virtio_blk_set_hotplug", 00:04:27.357 "bdev_iscsi_delete", 00:04:27.357 "bdev_iscsi_create", 00:04:27.357 "bdev_iscsi_set_options", 00:04:27.357 "accel_error_inject_error", 00:04:27.357 "ioat_scan_accel_module", 00:04:27.357 "dsa_scan_accel_module", 00:04:27.357 "iaa_scan_accel_module", 00:04:27.357 "keyring_file_remove_key", 00:04:27.357 "keyring_file_add_key", 00:04:27.357 "keyring_linux_set_options", 00:04:27.357 "fsdev_aio_delete", 00:04:27.357 "fsdev_aio_create", 00:04:27.357 "iscsi_get_histogram", 00:04:27.357 "iscsi_enable_histogram", 00:04:27.357 "iscsi_set_options", 00:04:27.357 "iscsi_get_auth_groups", 00:04:27.357 "iscsi_auth_group_remove_secret", 00:04:27.357 "iscsi_auth_group_add_secret", 00:04:27.357 "iscsi_delete_auth_group", 00:04:27.357 "iscsi_create_auth_group", 00:04:27.357 "iscsi_set_discovery_auth", 00:04:27.357 "iscsi_get_options", 00:04:27.357 "iscsi_target_node_request_logout", 00:04:27.357 "iscsi_target_node_set_redirect", 00:04:27.357 "iscsi_target_node_set_auth", 00:04:27.357 "iscsi_target_node_add_lun", 00:04:27.357 "iscsi_get_stats", 00:04:27.357 "iscsi_get_connections", 00:04:27.357 "iscsi_portal_group_set_auth", 00:04:27.357 "iscsi_start_portal_group", 00:04:27.357 "iscsi_delete_portal_group", 00:04:27.357 "iscsi_create_portal_group", 00:04:27.357 "iscsi_get_portal_groups", 00:04:27.357 "iscsi_delete_target_node", 00:04:27.357 "iscsi_target_node_remove_pg_ig_maps", 00:04:27.357 "iscsi_target_node_add_pg_ig_maps", 00:04:27.357 "iscsi_create_target_node", 00:04:27.357 "iscsi_get_target_nodes", 00:04:27.357 "iscsi_delete_initiator_group", 00:04:27.357 "iscsi_initiator_group_remove_initiators", 00:04:27.357 "iscsi_initiator_group_add_initiators", 00:04:27.357 "iscsi_create_initiator_group", 00:04:27.357 "iscsi_get_initiator_groups", 00:04:27.357 "nvmf_set_crdt", 00:04:27.357 "nvmf_set_config", 00:04:27.357 "nvmf_set_max_subsystems", 00:04:27.357 "nvmf_stop_mdns_prr", 00:04:27.357 "nvmf_publish_mdns_prr", 00:04:27.357 "nvmf_subsystem_get_listeners", 00:04:27.357 "nvmf_subsystem_get_qpairs", 00:04:27.357 "nvmf_subsystem_get_controllers", 00:04:27.358 "nvmf_get_stats", 00:04:27.358 "nvmf_get_transports", 00:04:27.358 "nvmf_create_transport", 00:04:27.358 "nvmf_get_targets", 00:04:27.358 "nvmf_delete_target", 00:04:27.358 "nvmf_create_target", 
00:04:27.358 "nvmf_subsystem_allow_any_host", 00:04:27.358 "nvmf_subsystem_set_keys", 00:04:27.358 "nvmf_subsystem_remove_host", 00:04:27.358 "nvmf_subsystem_add_host", 00:04:27.358 "nvmf_ns_remove_host", 00:04:27.358 "nvmf_ns_add_host", 00:04:27.358 "nvmf_subsystem_remove_ns", 00:04:27.358 "nvmf_subsystem_set_ns_ana_group", 00:04:27.358 "nvmf_subsystem_add_ns", 00:04:27.358 "nvmf_subsystem_listener_set_ana_state", 00:04:27.358 "nvmf_discovery_get_referrals", 00:04:27.358 "nvmf_discovery_remove_referral", 00:04:27.358 "nvmf_discovery_add_referral", 00:04:27.358 "nvmf_subsystem_remove_listener", 00:04:27.358 "nvmf_subsystem_add_listener", 00:04:27.358 "nvmf_delete_subsystem", 00:04:27.358 "nvmf_create_subsystem", 00:04:27.358 "nvmf_get_subsystems", 00:04:27.358 "env_dpdk_get_mem_stats", 00:04:27.358 "nbd_get_disks", 00:04:27.358 "nbd_stop_disk", 00:04:27.358 "nbd_start_disk", 00:04:27.358 "ublk_recover_disk", 00:04:27.358 "ublk_get_disks", 00:04:27.358 "ublk_stop_disk", 00:04:27.358 "ublk_start_disk", 00:04:27.358 "ublk_destroy_target", 00:04:27.358 "ublk_create_target", 00:04:27.358 "virtio_blk_create_transport", 00:04:27.358 "virtio_blk_get_transports", 00:04:27.358 "vhost_controller_set_coalescing", 00:04:27.358 "vhost_get_controllers", 00:04:27.358 "vhost_delete_controller", 00:04:27.358 "vhost_create_blk_controller", 00:04:27.358 "vhost_scsi_controller_remove_target", 00:04:27.358 "vhost_scsi_controller_add_target", 00:04:27.358 "vhost_start_scsi_controller", 00:04:27.358 "vhost_create_scsi_controller", 00:04:27.358 "thread_set_cpumask", 00:04:27.358 "scheduler_set_options", 00:04:27.358 "framework_get_governor", 00:04:27.358 "framework_get_scheduler", 00:04:27.358 "framework_set_scheduler", 00:04:27.358 "framework_get_reactors", 00:04:27.358 "thread_get_io_channels", 00:04:27.358 "thread_get_pollers", 00:04:27.358 "thread_get_stats", 00:04:27.358 "framework_monitor_context_switch", 00:04:27.358 "spdk_kill_instance", 00:04:27.358 "log_enable_timestamps", 00:04:27.358 "log_get_flags", 00:04:27.358 "log_clear_flag", 00:04:27.358 "log_set_flag", 00:04:27.358 "log_get_level", 00:04:27.358 "log_set_level", 00:04:27.358 "log_get_print_level", 00:04:27.358 "log_set_print_level", 00:04:27.358 "framework_enable_cpumask_locks", 00:04:27.358 "framework_disable_cpumask_locks", 00:04:27.358 "framework_wait_init", 00:04:27.358 "framework_start_init", 00:04:27.358 "scsi_get_devices", 00:04:27.358 "bdev_get_histogram", 00:04:27.358 "bdev_enable_histogram", 00:04:27.358 "bdev_set_qos_limit", 00:04:27.358 "bdev_set_qd_sampling_period", 00:04:27.358 "bdev_get_bdevs", 00:04:27.358 "bdev_reset_iostat", 00:04:27.358 "bdev_get_iostat", 00:04:27.358 "bdev_examine", 00:04:27.358 "bdev_wait_for_examine", 00:04:27.358 "bdev_set_options", 00:04:27.358 "accel_get_stats", 00:04:27.358 "accel_set_options", 00:04:27.358 "accel_set_driver", 00:04:27.358 "accel_crypto_key_destroy", 00:04:27.358 "accel_crypto_keys_get", 00:04:27.358 "accel_crypto_key_create", 00:04:27.358 "accel_assign_opc", 00:04:27.358 "accel_get_module_info", 00:04:27.358 "accel_get_opc_assignments", 00:04:27.358 "vmd_rescan", 00:04:27.358 "vmd_remove_device", 00:04:27.358 "vmd_enable", 00:04:27.358 "sock_get_default_impl", 00:04:27.358 "sock_set_default_impl", 00:04:27.358 "sock_impl_set_options", 00:04:27.358 "sock_impl_get_options", 00:04:27.358 "iobuf_get_stats", 00:04:27.358 "iobuf_set_options", 00:04:27.358 "keyring_get_keys", 00:04:27.358 "framework_get_pci_devices", 00:04:27.358 "framework_get_config", 00:04:27.358 "framework_get_subsystems", 
00:04:27.358 "fsdev_set_opts", 00:04:27.358 "fsdev_get_opts", 00:04:27.358 "trace_get_info", 00:04:27.358 "trace_get_tpoint_group_mask", 00:04:27.358 "trace_disable_tpoint_group", 00:04:27.358 "trace_enable_tpoint_group", 00:04:27.358 "trace_clear_tpoint_mask", 00:04:27.358 "trace_set_tpoint_mask", 00:04:27.358 "notify_get_notifications", 00:04:27.358 "notify_get_types", 00:04:27.358 "spdk_get_version", 00:04:27.358 "rpc_get_methods" 00:04:27.358 ] 00:04:27.617 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.617 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:27.617 11:43:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3053514 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3053514 ']' 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3053514 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053514 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053514' 00:04:27.617 killing process with pid 3053514 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3053514 00:04:27.617 11:43:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3053514 00:04:27.877 00:04:27.877 real 0m1.139s 00:04:27.877 user 0m1.912s 00:04:27.877 sys 0m0.425s 00:04:27.877 11:43:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.877 11:43:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.877 ************************************ 00:04:27.877 END TEST spdkcli_tcp 00:04:27.877 ************************************ 00:04:27.877 11:43:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.877 11:43:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.877 11:43:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.877 11:43:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.877 ************************************ 00:04:27.877 START TEST dpdk_mem_utility 00:04:27.877 ************************************ 00:04:27.877 11:43:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.138 * Looking for test storage... 
00:04:28.138 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:28.138 11:43:35 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.138 11:43:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.138 11:43:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.138 11:43:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.138 --rc genhtml_branch_coverage=1 00:04:28.138 --rc genhtml_function_coverage=1 00:04:28.138 --rc genhtml_legend=1 00:04:28.138 --rc geninfo_all_blocks=1 00:04:28.138 --rc geninfo_unexecuted_blocks=1 00:04:28.138 00:04:28.138 ' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.138 --rc 
genhtml_branch_coverage=1 00:04:28.138 --rc genhtml_function_coverage=1 00:04:28.138 --rc genhtml_legend=1 00:04:28.138 --rc geninfo_all_blocks=1 00:04:28.138 --rc geninfo_unexecuted_blocks=1 00:04:28.138 00:04:28.138 ' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.138 --rc genhtml_branch_coverage=1 00:04:28.138 --rc genhtml_function_coverage=1 00:04:28.138 --rc genhtml_legend=1 00:04:28.138 --rc geninfo_all_blocks=1 00:04:28.138 --rc geninfo_unexecuted_blocks=1 00:04:28.138 00:04:28.138 ' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.138 --rc genhtml_branch_coverage=1 00:04:28.138 --rc genhtml_function_coverage=1 00:04:28.138 --rc genhtml_legend=1 00:04:28.138 --rc geninfo_all_blocks=1 00:04:28.138 --rc geninfo_unexecuted_blocks=1 00:04:28.138 00:04:28.138 ' 00:04:28.138 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.138 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3053812 00:04:28.138 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3053812 00:04:28.138 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3053812 ']' 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.138 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.138 [2024-12-09 11:43:36.086254] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:28.139 [2024-12-09 11:43:36.086300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053812 ] 00:04:28.139 [2024-12-09 11:43:36.162790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.398 [2024-12-09 11:43:36.206267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.398 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.398 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:28.398 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.398 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.398 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.398 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.398 { 00:04:28.398 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.398 } 00:04:28.398 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.398 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.659 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:28.659 1 heaps totaling size 818.000000 MiB 00:04:28.659 size: 818.000000 MiB heap id: 0 00:04:28.659 end heaps---------- 00:04:28.659 9 mempools totaling size 603.782043 MiB 00:04:28.659 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.659 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.659 size: 100.555481 MiB name: bdev_io_3053812 00:04:28.659 size: 50.003479 MiB name: msgpool_3053812 00:04:28.659 size: 36.509338 MiB name: fsdev_io_3053812 00:04:28.659 size: 21.763794 MiB name: PDU_Pool 00:04:28.659 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.659 size: 4.133484 MiB name: evtpool_3053812 00:04:28.659 size: 0.026123 MiB name: Session_Pool 00:04:28.659 end mempools------- 00:04:28.659 6 memzones totaling size 4.142822 MiB 00:04:28.659 size: 1.000366 MiB name: RG_ring_0_3053812 00:04:28.660 size: 1.000366 MiB name: RG_ring_1_3053812 00:04:28.660 size: 1.000366 MiB name: RG_ring_4_3053812 00:04:28.660 size: 1.000366 MiB name: RG_ring_5_3053812 00:04:28.660 size: 0.125366 MiB name: RG_ring_2_3053812 00:04:28.660 size: 0.015991 MiB name: RG_ring_3_3053812 00:04:28.660 end memzones------- 00:04:28.660 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.660 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:28.660 list of free elements. 
size: 10.852478 MiB 00:04:28.660 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:28.660 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:28.660 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:28.660 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:28.660 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:28.660 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:28.660 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:28.660 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:28.660 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:28.660 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:28.660 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:28.660 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:28.660 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:28.660 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:28.660 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:28.660 list of standard malloc elements. size: 199.218628 MiB 00:04:28.660 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:28.660 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:28.660 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.660 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:28.660 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:28.660 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.660 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:28.660 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.660 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:28.660 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:28.660 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:28.660 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:28.660 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:28.660 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:28.660 list of memzone associated elements. size: 607.928894 MiB 00:04:28.660 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:28.660 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.660 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:28.660 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.660 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:28.660 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3053812_0 00:04:28.660 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:28.660 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3053812_0 00:04:28.660 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:28.660 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3053812_0 00:04:28.660 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:28.660 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.660 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:28.660 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.660 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:28.660 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3053812_0 00:04:28.660 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:28.660 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3053812 00:04:28.660 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.660 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3053812 00:04:28.660 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:28.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.660 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:28.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.660 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:28.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.660 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:28.660 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.660 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:28.660 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3053812 00:04:28.660 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:28.660 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3053812 00:04:28.660 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:28.660 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3053812 00:04:28.660 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:28.660 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3053812 00:04:28.660 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:28.660 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3053812 00:04:28.660 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:28.660 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3053812 00:04:28.660 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:28.660 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.660 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:28.660 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.660 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:28.660 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.660 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:28.660 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3053812 00:04:28.660 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:28.660 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3053812 00:04:28.660 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:28.660 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.660 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:28.660 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.660 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:28.660 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3053812 00:04:28.660 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:28.660 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.660 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:28.660 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3053812 00:04:28.660 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:28.660 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3053812 00:04:28.660 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:28.660 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3053812 00:04:28.660 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:28.660 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.660 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.660 11:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3053812 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3053812 ']' 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3053812 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053812 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053812' 00:04:28.660 killing process with pid 3053812 00:04:28.660 11:43:36 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3053812 00:04:28.660 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3053812 00:04:28.927 00:04:28.927 real 0m1.027s 00:04:28.927 user 0m0.968s 00:04:28.927 sys 0m0.410s 00:04:28.927 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.927 11:43:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.927 ************************************ 00:04:28.927 END TEST dpdk_mem_utility 00:04:28.927 ************************************ 00:04:28.927 11:43:36 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:28.927 11:43:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.927 11:43:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.927 11:43:36 -- common/autotest_common.sh@10 -- # set +x 00:04:28.927 ************************************ 00:04:28.927 START TEST event 00:04:28.927 ************************************ 00:04:28.927 11:43:36 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:29.188 * Looking for test storage... 00:04:29.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.188 11:43:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.188 11:43:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.188 11:43:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.188 11:43:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.188 11:43:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.188 11:43:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.188 11:43:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.188 11:43:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.188 11:43:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.188 11:43:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.188 11:43:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.188 11:43:37 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.188 11:43:37 event -- scripts/common.sh@345 -- # : 1 00:04:29.188 11:43:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.188 11:43:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.188 11:43:37 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.188 11:43:37 event -- scripts/common.sh@353 -- # local d=1 00:04:29.188 11:43:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.188 11:43:37 event -- scripts/common.sh@355 -- # echo 1 00:04:29.188 11:43:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.188 11:43:37 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.188 11:43:37 event -- scripts/common.sh@353 -- # local d=2 00:04:29.188 11:43:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.188 11:43:37 event -- scripts/common.sh@355 -- # echo 2 00:04:29.188 11:43:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.188 11:43:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.188 11:43:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.188 11:43:37 event -- scripts/common.sh@368 -- # return 0 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.188 --rc genhtml_branch_coverage=1 00:04:29.188 --rc genhtml_function_coverage=1 00:04:29.188 --rc genhtml_legend=1 00:04:29.188 --rc geninfo_all_blocks=1 00:04:29.188 --rc geninfo_unexecuted_blocks=1 00:04:29.188 00:04:29.188 ' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.188 --rc genhtml_branch_coverage=1 00:04:29.188 --rc genhtml_function_coverage=1 00:04:29.188 --rc genhtml_legend=1 00:04:29.188 --rc geninfo_all_blocks=1 00:04:29.188 --rc geninfo_unexecuted_blocks=1 00:04:29.188 00:04:29.188 ' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.188 --rc genhtml_branch_coverage=1 00:04:29.188 --rc genhtml_function_coverage=1 00:04:29.188 --rc genhtml_legend=1 00:04:29.188 --rc geninfo_all_blocks=1 00:04:29.188 --rc geninfo_unexecuted_blocks=1 00:04:29.188 00:04:29.188 ' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.188 --rc genhtml_branch_coverage=1 00:04:29.188 --rc genhtml_function_coverage=1 00:04:29.188 --rc genhtml_legend=1 00:04:29.188 --rc geninfo_all_blocks=1 00:04:29.188 --rc geninfo_unexecuted_blocks=1 00:04:29.188 00:04:29.188 ' 00:04:29.188 11:43:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:29.188 11:43:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.188 11:43:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:29.188 11:43:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.188 11:43:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.188 ************************************ 00:04:29.188 START TEST event_perf 00:04:29.188 ************************************ 00:04:29.188 11:43:37 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:04:29.188 Running I/O for 1 seconds...[2024-12-09 11:43:37.188323] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:29.188 [2024-12-09 11:43:37.188391] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054104 ] 00:04:29.448 [2024-12-09 11:43:37.266661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.448 [2024-12-09 11:43:37.309795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.448 [2024-12-09 11:43:37.309903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.448 [2024-12-09 11:43:37.309942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.448 [2024-12-09 11:43:37.309943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.385 Running I/O for 1 seconds... 00:04:30.385 lcore 0: 202134 00:04:30.385 lcore 1: 202132 00:04:30.385 lcore 2: 202132 00:04:30.385 lcore 3: 202132 00:04:30.385 done. 00:04:30.385 00:04:30.385 real 0m1.182s 00:04:30.385 user 0m4.098s 00:04:30.385 sys 0m0.081s 00:04:30.385 11:43:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.385 11:43:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.385 ************************************ 00:04:30.385 END TEST event_perf 00:04:30.385 ************************************ 00:04:30.385 11:43:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.385 11:43:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:30.385 11:43:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.385 11:43:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.385 ************************************ 00:04:30.385 START TEST event_reactor 00:04:30.385 ************************************ 00:04:30.385 11:43:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.645 [2024-12-09 11:43:38.442569] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:30.645 [2024-12-09 11:43:38.442641] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054362 ] 00:04:30.645 [2024-12-09 11:43:38.524410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.645 [2024-12-09 11:43:38.565842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.582 test_start 00:04:31.583 oneshot 00:04:31.583 tick 100 00:04:31.583 tick 100 00:04:31.583 tick 250 00:04:31.583 tick 100 00:04:31.583 tick 100 00:04:31.583 tick 100 00:04:31.583 tick 250 00:04:31.583 tick 500 00:04:31.583 tick 100 00:04:31.583 tick 100 00:04:31.583 tick 250 00:04:31.583 tick 100 00:04:31.583 tick 100 00:04:31.583 test_end 00:04:31.583 00:04:31.583 real 0m1.185s 00:04:31.583 user 0m1.100s 00:04:31.583 sys 0m0.080s 00:04:31.583 11:43:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.583 11:43:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:31.583 ************************************ 00:04:31.583 END TEST event_reactor 00:04:31.583 ************************************ 00:04:31.842 11:43:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.842 11:43:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:31.842 11:43:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.842 11:43:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.842 ************************************ 00:04:31.842 START TEST event_reactor_perf 00:04:31.842 ************************************ 00:04:31.842 11:43:39 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.842 [2024-12-09 11:43:39.696741] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:31.842 [2024-12-09 11:43:39.696804] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054615 ] 00:04:31.842 [2024-12-09 11:43:39.777260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.842 [2024-12-09 11:43:39.816249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.223 test_start 00:04:33.223 test_end 00:04:33.223 Performance: 500957 events per second 00:04:33.223 00:04:33.223 real 0m1.183s 00:04:33.223 user 0m1.103s 00:04:33.223 sys 0m0.075s 00:04:33.223 11:43:40 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.223 11:43:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.223 ************************************ 00:04:33.223 END TEST event_reactor_perf 00:04:33.223 ************************************ 00:04:33.223 11:43:40 event -- event/event.sh@49 -- # uname -s 00:04:33.223 11:43:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.223 11:43:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.223 11:43:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.223 11:43:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.223 11:43:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.223 ************************************ 00:04:33.223 START TEST event_scheduler 00:04:33.223 ************************************ 00:04:33.223 11:43:40 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.223 * Looking for test storage... 
00:04:33.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.223 11:43:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.223 --rc genhtml_branch_coverage=1 00:04:33.223 --rc genhtml_function_coverage=1 00:04:33.223 --rc genhtml_legend=1 00:04:33.223 --rc geninfo_all_blocks=1 00:04:33.223 --rc geninfo_unexecuted_blocks=1 00:04:33.223 00:04:33.223 ' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.223 --rc genhtml_branch_coverage=1 00:04:33.223 --rc genhtml_function_coverage=1 00:04:33.223 --rc genhtml_legend=1 00:04:33.223 --rc geninfo_all_blocks=1 00:04:33.223 --rc geninfo_unexecuted_blocks=1 00:04:33.223 00:04:33.223 ' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.223 --rc genhtml_branch_coverage=1 00:04:33.223 --rc genhtml_function_coverage=1 00:04:33.223 --rc genhtml_legend=1 00:04:33.223 --rc geninfo_all_blocks=1 00:04:33.223 --rc geninfo_unexecuted_blocks=1 00:04:33.223 00:04:33.223 ' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.223 --rc genhtml_branch_coverage=1 00:04:33.223 --rc genhtml_function_coverage=1 00:04:33.223 --rc genhtml_legend=1 00:04:33.223 --rc geninfo_all_blocks=1 00:04:33.223 --rc geninfo_unexecuted_blocks=1 00:04:33.223 00:04:33.223 ' 00:04:33.223 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.223 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3054896 00:04:33.223 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.223 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.223 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3054896 
00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3054896 ']' 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.223 11:43:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.223 [2024-12-09 11:43:41.139054] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:33.223 [2024-12-09 11:43:41.139105] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054896 ] 00:04:33.223 [2024-12-09 11:43:41.216833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.223 [2024-12-09 11:43:41.258744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.223 [2024-12-09 11:43:41.258852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.223 [2024-12-09 11:43:41.258905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.223 [2024-12-09 11:43:41.258905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:33.483 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.483 [2024-12-09 11:43:41.299549] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:33.483 [2024-12-09 11:43:41.299568] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:33.483 [2024-12-09 11:43:41.299578] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:33.483 [2024-12-09 11:43:41.299584] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:33.483 [2024-12-09 11:43:41.299589] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.483 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.483 [2024-12-09 11:43:41.377818] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.483 11:43:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.483 11:43:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.483 ************************************ 00:04:33.484 START TEST scheduler_create_thread 00:04:33.484 ************************************ 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 2 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 3 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 4 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 5 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 6 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 7 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 8 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 9 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 10 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.484 11:43:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.422 11:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.422 11:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.422 11:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.422 11:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.802 11:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.802 11:43:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:35.802 11:43:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:35.802 11:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.802 11:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.182 11:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.182 00:04:37.182 real 0m3.380s 00:04:37.182 user 0m0.026s 00:04:37.182 sys 0m0.003s 00:04:37.182 11:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.182 11:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.182 ************************************ 00:04:37.182 END TEST scheduler_create_thread 00:04:37.182 ************************************ 00:04:37.182 11:43:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.182 11:43:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3054896 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3054896 ']' 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3054896 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054896 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054896' 00:04:37.183 killing process with pid 3054896 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3054896 00:04:37.183 11:43:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3054896 00:04:37.183 [2024-12-09 11:43:45.173910] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
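The scheduler_create_thread test above drives the app entirely through the scheduler_plugin RPC extension: one busy (active 100) and one idle (active 0) thread pinned to each of the four cores, then unpinned threads whose ids are captured from the RPC output and exercised with scheduler_thread_set_active and scheduler_thread_delete. A condensed sketch of that lifecycle, assuming the same plugin and default socket (the rpc() wrapper here is illustrative, not part of the harness):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    # One busy and one idle thread pinned per core, as in scheduler.sh@12-19.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    # Unpinned threads; the RPC prints the new thread id on stdout.
    id=$(rpc scheduler_thread_create -n half_active -a 0)
    rpc scheduler_thread_set_active "$id" 50   # raise it to 50% active

    id=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$id"          # and remove it again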
00:04:37.444 00:04:37.444 real 0m4.452s 00:04:37.444 user 0m7.796s 00:04:37.444 sys 0m0.366s 00:04:37.444 11:43:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.444 11:43:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.444 ************************************ 00:04:37.444 END TEST event_scheduler 00:04:37.444 ************************************ 00:04:37.444 11:43:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:37.444 11:43:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:37.444 11:43:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.444 11:43:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.444 11:43:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.444 ************************************ 00:04:37.444 START TEST app_repeat 00:04:37.444 ************************************ 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3055640 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3055640' 00:04:37.444 Process app_repeat pid: 3055640 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:37.444 spdk_app_start Round 0 00:04:37.444 11:43:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3055640 /var/tmp/spdk-nbd.sock 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3055640 ']' 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.444 11:43:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.444 [2024-12-09 11:43:45.465911] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
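app_repeat itself is launched with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, i.e. its RPC server on a dedicated socket, two cores (0x3), and -t 4 matching the repeat_times=4 local seen above; each round then creates two 64 MiB malloc bdevs with a 4096-byte block size over that socket. A hedged sketch of the launch and bdev setup (backgrounding with & stands in for the harness's own process management):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    # Start the repeat app on cores 0-1 with its RPC socket, as traced above.
    "$SPDK/test/event/app_repeat/app_repeat" -r "$SOCK" -m 0x3 -t 4 &
    repeat_pid=$!

    # bdev_malloc_create prints the new bdev's name (Malloc0, Malloc1 in the log).
    malloc0=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096)
    malloc1=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096)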
00:04:37.444 [2024-12-09 11:43:45.465952] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055640 ] 00:04:37.703 [2024-12-09 11:43:45.534979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.703 [2024-12-09 11:43:45.591246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.703 [2024-12-09 11:43:45.591251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.703 11:43:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.703 11:43:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:37.703 11:43:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.962 Malloc0 00:04:37.962 11:43:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.221 Malloc1 00:04:38.221 11:43:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.221 11:43:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.480 /dev/nbd0 00:04:38.480 11:43:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.480 11:43:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.480 1+0 records in 00:04:38.480 1+0 records out 00:04:38.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222887 s, 18.4 MB/s 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.480 11:43:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.480 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.480 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.480 11:43:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.740 /dev/nbd1 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.740 1+0 records in 00:04:38.740 1+0 records out 00:04:38.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234575 s, 17.5 MB/s 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.740 11:43:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.740 11:43:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.999 { 00:04:38.999 "nbd_device": "/dev/nbd0", 00:04:38.999 "bdev_name": "Malloc0" 00:04:38.999 }, 00:04:38.999 { 00:04:38.999 "nbd_device": "/dev/nbd1", 00:04:38.999 "bdev_name": "Malloc1" 00:04:38.999 } 00:04:38.999 ]' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.999 { 00:04:38.999 "nbd_device": "/dev/nbd0", 00:04:38.999 "bdev_name": "Malloc0" 00:04:38.999 }, 00:04:38.999 { 00:04:38.999 "nbd_device": "/dev/nbd1", 00:04:38.999 "bdev_name": "Malloc1" 00:04:38.999 } 00:04:38.999 ]' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.999 /dev/nbd1' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.999 /dev/nbd1' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.999 11:43:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.000 256+0 records in 00:04:39.000 256+0 records out 00:04:39.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108412 s, 96.7 MB/s 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.000 256+0 records in 00:04:39.000 256+0 records out 00:04:39.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138655 s, 75.6 MB/s 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.000 256+0 records in 00:04:39.000 256+0 records out 00:04:39.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149806 s, 70.0 MB/s 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.000 11:43:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.259 11:43:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.518 11:43:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.777 11:43:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.777 11:43:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.037 11:43:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.037 [2024-12-09 11:43:48.006750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.037 [2024-12-09 11:43:48.043535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.037 [2024-12-09 11:43:48.043537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.037 [2024-12-09 11:43:48.085042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.037 [2024-12-09 11:43:48.085079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.325 11:43:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.325 11:43:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:43.325 spdk_app_start Round 1 00:04:43.325 11:43:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3055640 /var/tmp/spdk-nbd.sock 00:04:43.325 11:43:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3055640 ']' 00:04:43.325 11:43:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.325 11:43:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.326 11:43:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
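Each round runs the same nbd_rpc_data_verify pass seen above: attach both malloc bdevs to /dev/nbd0 and /dev/nbd1, push 1 MiB of random data (256 x 4096-byte blocks) through each device with O_DIRECT, and compare it back byte-for-byte before detaching. Reduced to its traced commands:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    tmp=$SPDK/test/event/nbdrandtest          # scratch file path from the trace

    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"            # any mismatch fails the round
    done
    rm "$tmp"

    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk /dev/nbd0
    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk /dev/nbd1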
00:04:43.326 11:43:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.326 11:43:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.326 11:43:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.326 11:43:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.326 11:43:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.326 Malloc0 00:04:43.326 11:43:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.585 Malloc1 00:04:43.585 11:43:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.585 11:43:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.849 /dev/nbd0 00:04:43.849 11:43:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.849 11:43:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:43.849 1+0 records in 00:04:43.849 1+0 records out 00:04:43.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261918 s, 15.6 MB/s 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.849 11:43:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.849 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.849 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.849 11:43:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.109 /dev/nbd1 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.109 1+0 records in 00:04:44.109 1+0 records out 00:04:44.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160493 s, 25.5 MB/s 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.109 11:43:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.109 11:43:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.368 { 00:04:44.368 
"nbd_device": "/dev/nbd0", 00:04:44.368 "bdev_name": "Malloc0" 00:04:44.368 }, 00:04:44.368 { 00:04:44.368 "nbd_device": "/dev/nbd1", 00:04:44.368 "bdev_name": "Malloc1" 00:04:44.368 } 00:04:44.368 ]' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.368 { 00:04:44.368 "nbd_device": "/dev/nbd0", 00:04:44.368 "bdev_name": "Malloc0" 00:04:44.368 }, 00:04:44.368 { 00:04:44.368 "nbd_device": "/dev/nbd1", 00:04:44.368 "bdev_name": "Malloc1" 00:04:44.368 } 00:04:44.368 ]' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.368 /dev/nbd1' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.368 /dev/nbd1' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.368 256+0 records in 00:04:44.368 256+0 records out 00:04:44.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102807 s, 102 MB/s 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.368 256+0 records in 00:04:44.368 256+0 records out 00:04:44.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136042 s, 77.1 MB/s 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.368 256+0 records in 00:04:44.368 256+0 records out 00:04:44.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147721 s, 71.0 MB/s 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.368 11:43:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.628 11:43:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.887 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.146 11:43:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.146 11:43:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.146 11:43:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.405 [2024-12-09 11:43:53.318617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.405 [2024-12-09 11:43:53.354933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.405 [2024-12-09 11:43:53.354934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.405 [2024-12-09 11:43:53.397013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.405 [2024-12-09 11:43:53.397053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.693 11:43:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.693 11:43:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:48.693 spdk_app_start Round 2 00:04:48.693 11:43:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3055640 /var/tmp/spdk-nbd.sock 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3055640 ']' 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
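Before killing the app, every round confirms that no NBD exports remain by parsing nbd_get_disks: the JSON array is flattened with jq and the /dev/nbd* entries counted, and a clean round ends with a count of zero (hence the trailing true in the trace, since grep -c exits non-zero when nothing matches). As a sketch:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    json=$("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || exit 1              # leftover disks fail the test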
00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.693 11:43:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:48.693 11:43:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.693 Malloc0 00:04:48.693 11:43:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.952 Malloc1 00:04:48.952 11:43:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.952 11:43:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.953 11:43:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.953 /dev/nbd0 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:49.219 1+0 records in 00:04:49.219 1+0 records out 00:04:49.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023663 s, 17.3 MB/s 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.219 /dev/nbd1 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.219 11:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.219 11:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.479 1+0 records in 00:04:49.479 1+0 records out 00:04:49.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198258 s, 20.7 MB/s 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.479 11:43:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.479 { 00:04:49.479 
"nbd_device": "/dev/nbd0", 00:04:49.479 "bdev_name": "Malloc0" 00:04:49.479 }, 00:04:49.479 { 00:04:49.479 "nbd_device": "/dev/nbd1", 00:04:49.479 "bdev_name": "Malloc1" 00:04:49.479 } 00:04:49.479 ]' 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.479 { 00:04:49.479 "nbd_device": "/dev/nbd0", 00:04:49.479 "bdev_name": "Malloc0" 00:04:49.479 }, 00:04:49.479 { 00:04:49.479 "nbd_device": "/dev/nbd1", 00:04:49.479 "bdev_name": "Malloc1" 00:04:49.479 } 00:04:49.479 ]' 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.479 /dev/nbd1' 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.479 /dev/nbd1' 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.479 11:43:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.738 11:43:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.738 11:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.738 11:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.738 11:43:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.738 11:43:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.739 256+0 records in 00:04:49.739 256+0 records out 00:04:49.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010837 s, 96.8 MB/s 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.739 256+0 records in 00:04:49.739 256+0 records out 00:04:49.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140225 s, 74.8 MB/s 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.739 256+0 records in 00:04:49.739 256+0 records out 00:04:49.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154059 s, 68.1 MB/s 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.739 11:43:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.999 11:43:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.999 11:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.258 11:43:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.258 11:43:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.518 11:43:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.777 [2024-12-09 11:43:58.652471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.777 [2024-12-09 11:43:58.688718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.777 [2024-12-09 11:43:58.688719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.777 [2024-12-09 11:43:58.730142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.777 [2024-12-09 11:43:58.730180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.065 11:44:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3055640 /var/tmp/spdk-nbd.sock 00:04:54.065 11:44:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3055640 ']' 00:04:54.065 11:44:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.065 11:44:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.065 11:44:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
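The waitfornbd helper traced above is the pattern this suite uses to decide an NBD device is actually usable: poll /proc/partitions until the kernel lists the name, then prove the device is readable with a single direct-I/O block. A condensed sketch of that flow in bash (the retry sleep and the failure return are not visible in the trace and are assumptions):

    waitfornbd() {
        local nbd_name=$1 tmp_file=/tmp/nbdtest i size
        # Wait (up to 20 tries) for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed retry interval
        done
        # Read one 4 KiB block with O_DIRECT so the page cache cannot lie.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1    # assumed retry interval
        done
        return 1         # assumed failure path
    }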
00:04:54.065 11:44:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.066 11:44:01 event.app_repeat -- event/event.sh@39 -- # killprocess 3055640 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3055640 ']' 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3055640 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055640 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055640' 00:04:54.066 killing process with pid 3055640 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3055640 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3055640 00:04:54.066 spdk_app_start is called in Round 0. 00:04:54.066 Shutdown signal received, stop current app iteration 00:04:54.066 Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 reinitialization... 00:04:54.066 spdk_app_start is called in Round 1. 00:04:54.066 Shutdown signal received, stop current app iteration 00:04:54.066 Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 reinitialization... 00:04:54.066 spdk_app_start is called in Round 2. 00:04:54.066 Shutdown signal received, stop current app iteration 00:04:54.066 Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 reinitialization... 00:04:54.066 spdk_app_start is called in Round 3. 
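The write/verify pair traced earlier (nbd_dd_data_verify in its write and verify passes) is the actual data-integrity check: 1 MiB from /dev/urandom is staged in a file, copied onto every NBD device with O_DIRECT, and each device is then compared byte-for-byte against the source. A minimal sketch that folds the two traced passes into one function (the device list and temp path are illustrative):

    nbd_dd_data_verify() {
        local nbd_list=("$@") tmp_file=/tmp/nbdrandtest dev
        # Write pass: 256 x 4 KiB of random data onto each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
        # Verify pass: cmp -b prints differing bytes, -n 1M bounds the compare.
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    }

Called as nbd_dd_data_verify /dev/nbd0 /dev/nbd1; any mismatch makes cmp exit non-zero, which is what fails the test.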
00:04:54.066 Shutdown signal received, stop current app iteration 00:04:54.066 11:44:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:54.066 11:44:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:54.066 00:04:54.066 real 0m16.461s 00:04:54.066 user 0m36.245s 00:04:54.066 sys 0m2.590s 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.066 11:44:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.066 ************************************ 00:04:54.066 END TEST app_repeat 00:04:54.066 ************************************ 00:04:54.066 11:44:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:54.066 11:44:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:54.066 11:44:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.066 11:44:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.066 11:44:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.066 ************************************ 00:04:54.066 START TEST cpu_locks 00:04:54.066 ************************************ 00:04:54.066 11:44:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:54.066 * Looking for test storage... 00:04:54.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:54.066 11:44:02 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.066 11:44:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.066 11:44:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.325 11:44:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.325 11:44:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.326 11:44:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.326 --rc genhtml_branch_coverage=1 00:04:54.326 --rc genhtml_function_coverage=1 00:04:54.326 --rc genhtml_legend=1 00:04:54.326 --rc geninfo_all_blocks=1 00:04:54.326 --rc geninfo_unexecuted_blocks=1 00:04:54.326 00:04:54.326 ' 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.326 --rc genhtml_branch_coverage=1 00:04:54.326 --rc genhtml_function_coverage=1 00:04:54.326 --rc genhtml_legend=1 00:04:54.326 --rc geninfo_all_blocks=1 00:04:54.326 --rc geninfo_unexecuted_blocks=1 00:04:54.326 00:04:54.326 ' 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.326 --rc genhtml_branch_coverage=1 00:04:54.326 --rc genhtml_function_coverage=1 00:04:54.326 --rc genhtml_legend=1 00:04:54.326 --rc geninfo_all_blocks=1 00:04:54.326 --rc geninfo_unexecuted_blocks=1 00:04:54.326 00:04:54.326 ' 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.326 --rc genhtml_branch_coverage=1 00:04:54.326 --rc genhtml_function_coverage=1 00:04:54.326 --rc genhtml_legend=1 00:04:54.326 --rc geninfo_all_blocks=1 00:04:54.326 --rc geninfo_unexecuted_blocks=1 00:04:54.326 00:04:54.326 ' 00:04:54.326 11:44:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:54.326 11:44:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:54.326 11:44:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:54.326 11:44:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.326 11:44:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.326 ************************************ 
00:04:54.326 START TEST default_locks 00:04:54.326 ************************************ 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3058642 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3058642 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3058642 ']' 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.326 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.326 [2024-12-09 11:44:02.233721] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:54.326 [2024-12-09 11:44:02.233762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058642 ] 00:04:54.326 [2024-12-09 11:44:02.308710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.326 [2024-12-09 11:44:02.347925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.585 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.585 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:54.585 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3058642 00:04:54.585 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3058642 00:04:54.585 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.154 lslocks: write error 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3058642 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3058642 ']' 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3058642 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058642 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3058642' 00:04:55.154 killing process with pid 3058642 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3058642 00:04:55.154 11:44:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3058642 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3058642 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3058642 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.413 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3058642 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3058642 ']' 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
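Back at the top of the cpu_locks suite, the trace stepped through scripts/common.sh's lt/cmp_versions pair, which splits version strings on '.', '-' and ':' and compares them field by field (that is how "lt 1.15 2" came out true). A compact reconstruction of the traced logic, assuming purely numeric fields (the traced original additionally validates each field against ^[0-9]+$ first):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:               # split on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v f1 f2
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            f1=${ver1[v]:-0}        # missing fields compare as 0,
            f2=${ver2[v]:-0}        # so "2" behaves like "2.0"
            if ((f1 > f2)); then [[ $op == '>' ]]; return; fi
            if ((f1 < f2)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]          # all fields equal
    }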
00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3058642) - No such process 00:04:55.414 ERROR: process (pid: 3058642) is no longer running 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.414 00:04:55.414 real 0m1.106s 00:04:55.414 user 0m1.067s 00:04:55.414 sys 0m0.499s 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.414 11:44:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 ************************************ 00:04:55.414 END TEST default_locks 00:04:55.414 ************************************ 00:04:55.414 11:44:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:55.414 11:44:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.414 11:44:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.414 11:44:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 ************************************ 00:04:55.414 START TEST default_locks_via_rpc 00:04:55.414 ************************************ 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3058899 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3058899 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3058899 ']' 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
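The default_locks test that just ended asserts core ownership with a single pipeline: list the file locks held by the target pid and look for SPDK's per-core lock file (the per-core paths appear later in this log as /var/tmp/spdk_cpu_lock_000 and friends). The stray "lslocks: write error" lines are lslocks hitting a closed pipe, because grep -q exits as soon as it matches. The check, as a sketch:

    locks_exist() {
        local pid=$1
        # lslocks lists the locks a process holds; SPDK's core locks are
        # flock()ed files named /var/tmp/spdk_cpu_lock_<core>.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" || echo "core lock missing for $spdk_tgt_pid" >&2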
00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.414 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 [2024-12-09 11:44:03.410344] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:55.414 [2024-12-09 11:44:03.410389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058899 ] 00:04:55.673 [2024-12-09 11:44:03.487342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.673 [2024-12-09 11:44:03.524971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3058899 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3058899 00:04:55.932 11:44:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3058899 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3058899 ']' 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3058899 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058899 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.191 
11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058899' 00:04:56.191 killing process with pid 3058899 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3058899 00:04:56.191 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3058899 00:04:56.450 00:04:56.450 real 0m1.061s 00:04:56.450 user 0m1.010s 00:04:56.450 sys 0m0.484s 00:04:56.450 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.450 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.450 ************************************ 00:04:56.450 END TEST default_locks_via_rpc 00:04:56.450 ************************************ 00:04:56.450 11:44:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:56.450 11:44:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.450 11:44:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.450 11:44:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.450 ************************************ 00:04:56.450 START TEST non_locking_app_on_locked_coremask 00:04:56.450 ************************************ 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3059155 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3059155 /var/tmp/spdk.sock 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3059155 ']' 00:04:56.450 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.451 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.451 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.451 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.451 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.710 [2024-12-09 11:44:04.534242] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:56.710 [2024-12-09 11:44:04.534282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059155 ] 00:04:56.710 [2024-12-09 11:44:04.609250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.710 [2024-12-09 11:44:04.650871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3059158 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3059158 /var/tmp/spdk2.sock 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3059158 ']' 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.969 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.969 [2024-12-09 11:44:04.908087] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:56.969 [2024-12-09 11:44:04.908133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059158 ] 00:04:56.969 [2024-12-09 11:44:04.996534] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
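That "CPU core locks deactivated" notice is the point of this test: the second spdk_tgt is started on the same core mask but with --disable-cpumask-locks, so it skips lock acquisition instead of dying with "Cannot create lock on core 0". The launch pattern, condensed from the trace (waitforlisten stands in for the traced readiness loop):

    # First target claims core 0 and flock()s /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    # Second target shares core 0 by opting out of the lock, and answers
    # on its own RPC socket so the two instances do not collide.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

The same behaviour can also be toggled on a live target, as the via_rpc test above did with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs.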
00:04:56.969 [2024-12-09 11:44:04.996557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.227 [2024-12-09 11:44:05.079136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.796 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.796 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.796 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3059155 00:04:57.796 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3059155 00:04:57.796 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.364 lslocks: write error 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3059155 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3059155 ']' 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3059155 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059155 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059155' 00:04:58.364 killing process with pid 3059155 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3059155 00:04:58.364 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3059155 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3059158 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3059158 ']' 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3059158 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059158 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059158' 00:04:58.932 
killing process with pid 3059158 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3059158 00:04:58.932 11:44:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3059158 00:04:59.192 00:04:59.192 real 0m2.697s 00:04:59.192 user 0m2.804s 00:04:59.192 sys 0m0.918s 00:04:59.192 11:44:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.192 11:44:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.192 ************************************ 00:04:59.192 END TEST non_locking_app_on_locked_coremask 00:04:59.192 ************************************ 00:04:59.192 11:44:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:59.192 11:44:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.192 11:44:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.192 11:44:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.451 ************************************ 00:04:59.451 START TEST locking_app_on_unlocked_coremask 00:04:59.451 ************************************ 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3059652 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3059652 /var/tmp/spdk.sock 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3059652 ']' 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.451 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.451 [2024-12-09 11:44:07.303563] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:04:59.451 [2024-12-09 11:44:07.303605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059652 ] 00:04:59.451 [2024-12-09 11:44:07.377781] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
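killprocess, traced again just above for pid 3059158, is more careful than a bare kill: it confirms the pid is still alive with the null signal, reads the process's comm name so it can special-case a sudo wrapper, and waits so the exit status is reaped before the next test starts. A sketch of the traced flow (the sudo branch is never taken in this run, so its handling here is an assumption):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0               # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Assumed handling: never SIGTERM sudo itself.
            [ "$process_name" = sudo ] && { echo "refusing to signal sudo" >&2; return 1; }
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # tolerate the SIGTERM status
    }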
00:04:59.451 [2024-12-09 11:44:07.377807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.451 [2024-12-09 11:44:07.416270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3059663 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3059663 /var/tmp/spdk2.sock 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3059663 ']' 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.710 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.711 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.711 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.711 11:44:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.711 [2024-12-09 11:44:07.686016] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:04:59.711 [2024-12-09 11:44:07.686064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059663 ] 00:04:59.970 [2024-12-09 11:44:07.779343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.970 [2024-12-09 11:44:07.859508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.537 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.537 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.537 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3059663 00:05:00.537 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3059663 00:05:00.537 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.106 lslocks: write error 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3059652 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3059652 ']' 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3059652 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.106 11:44:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059652 00:05:01.106 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.106 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.106 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059652' 00:05:01.106 killing process with pid 3059652 00:05:01.106 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3059652 00:05:01.106 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3059652 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3059663 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3059663 ']' 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3059663 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059663 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.674 11:44:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059663' 00:05:01.674 killing process with pid 3059663 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3059663 00:05:01.674 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3059663 00:05:01.933 00:05:01.933 real 0m2.723s 00:05:01.933 user 0m2.860s 00:05:01.933 sys 0m0.896s 00:05:01.933 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.933 11:44:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.933 ************************************ 00:05:01.933 END TEST locking_app_on_unlocked_coremask 00:05:01.933 ************************************ 00:05:02.192 11:44:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:02.192 11:44:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.192 11:44:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.192 11:44:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.192 ************************************ 00:05:02.192 START TEST locking_app_on_locked_coremask 00:05:02.192 ************************************ 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3060151 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3060151 /var/tmp/spdk.sock 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3060151 ']' 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.192 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.192 [2024-12-09 11:44:10.092089] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:05:02.192 [2024-12-09 11:44:10.092134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060151 ] 00:05:02.192 [2024-12-09 11:44:10.168599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.192 [2024-12-09 11:44:10.209136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3060160 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3060160 /var/tmp/spdk2.sock 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3060160 /var/tmp/spdk2.sock 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3060160 /var/tmp/spdk2.sock 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3060160 ']' 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.451 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.452 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.452 11:44:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.452 [2024-12-09 11:44:10.478381] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:05:02.452 [2024-12-09 11:44:10.478425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060160 ] 00:05:02.710 [2024-12-09 11:44:10.573362] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3060151 has claimed it. 00:05:02.710 [2024-12-09 11:44:10.573403] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:03.278 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3060160) - No such process 00:05:03.278 ERROR: process (pid: 3060160) is no longer running 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3060151 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3060151 00:05:03.278 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.537 lslocks: write error 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3060151 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3060151 ']' 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3060151 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.537 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060151 00:05:03.795 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.795 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.795 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060151' 00:05:03.795 killing process with pid 3060151 00:05:03.795 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3060151 00:05:03.795 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3060151 00:05:04.054 00:05:04.054 real 0m1.876s 00:05:04.054 user 0m2.024s 00:05:04.054 sys 0m0.638s 00:05:04.054 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:04.054 11:44:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.054 ************************************ 00:05:04.054 END TEST locking_app_on_locked_coremask 00:05:04.054 ************************************ 00:05:04.054 11:44:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:04.054 11:44:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.054 11:44:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.054 11:44:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.054 ************************************ 00:05:04.054 START TEST locking_overlapped_coremask 00:05:04.054 ************************************ 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3060429 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3060429 /var/tmp/spdk.sock 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3060429 ']' 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.054 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.055 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.055 11:44:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.055 [2024-12-09 11:44:12.045977] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:05:04.055 [2024-12-09 11:44:12.046017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060429 ]
00:05:04.055 [2024-12-09 11:44:12.105593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:04.314 [2024-12-09 11:44:12.151588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:04.314 [2024-12-09 11:44:12.151697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.314 [2024-12-09 11:44:12.151697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3060646
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3060646 /var/tmp/spdk2.sock
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3060646 /var/tmp/spdk2.sock
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:04.573 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3060646 /var/tmp/spdk2.sock
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3060646 ']'
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.574 11:44:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:04.574 [2024-12-09 11:44:12.431028] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:04.574 [2024-12-09 11:44:12.431079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060646 ]
00:05:04.574 [2024-12-09 11:44:12.526192] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3060429 has claimed it.
00:05:04.574 [2024-12-09 11:44:12.526233] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:05.142 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3060646) - No such process
00:05:05.142 ERROR: process (pid: 3060646) is no longer running
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3060429
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3060429 ']'
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3060429
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060429
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:05.142 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060429'
killing process with pid 3060429
11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3060429
11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3060429
00:05:05.402
00:05:05.402 real 0m1.430s
00:05:05.402 user 0m3.988s
00:05:05.402 sys 0m0.372s
00:05:05.402 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.402 11:44:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.402 ************************************
00:05:05.402 END TEST locking_overlapped_coremask
00:05:05.402 ************************************
00:05:05.662 11:44:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:05.662 11:44:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.662 11:44:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.662 11:44:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.662 ************************************
00:05:05.662 START TEST locking_overlapped_coremask_via_rpc
00:05:05.662 ************************************
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3060766
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3060766 /var/tmp/spdk.sock
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3060766 ']'
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.662 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.662 [2024-12-09 11:44:13.544560] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:05.662 [2024-12-09 11:44:13.544607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060766 ]
00:05:05.662 [2024-12-09 11:44:13.622638] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:05.662 [2024-12-09 11:44:13.622664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:05.662 [2024-12-09 11:44:13.665909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.662 [2024-12-09 11:44:13.665933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.662 [2024-12-09 11:44:13.665933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3060909
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3060909 /var/tmp/spdk2.sock
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3060909 ']'
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.922 11:44:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.922 [2024-12-09 11:44:13.936147] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:05.922 [2024-12-09 11:44:13.936193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060909 ]
00:05:06.181 [2024-12-09 11:44:14.028236] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:06.181 [2024-12-09 11:44:14.028265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:06.181 [2024-12-09 11:44:14.115380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:06.181 [2024-12-09 11:44:14.118854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:06.181 [2024-12-09 11:44:14.118855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.749 [2024-12-09 11:44:14.774883] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3060766 has claimed it.
00:05:06.749 request:
00:05:06.749 {
00:05:06.749 "method": "framework_enable_cpumask_locks",
00:05:06.749 "req_id": 1
00:05:06.749 }
00:05:06.749 Got JSON-RPC error response
00:05:06.749 response:
00:05:06.749 {
00:05:06.749 "code": -32603,
00:05:06.749 "message": "Failed to claim CPU core: 2"
00:05:06.749 }
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3060766 /var/tmp/spdk.sock
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3060766 ']'
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:06.749 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3060909 /var/tmp/spdk2.sock
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3060909 ']'
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:07.008 11:44:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:07.266
00:05:07.266 real 0m1.723s
00:05:07.266 user 0m0.827s
00:05:07.266 sys 0m0.145s
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.266 11:44:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.266 ************************************
00:05:07.266 END TEST locking_overlapped_coremask_via_rpc
00:05:07.266 ************************************
00:05:07.266 11:44:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:07.266 11:44:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3060766 ]]
00:05:07.266 11:44:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3060766
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3060766 ']'
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3060766
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060766
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:07.266 11:44:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060766'
killing process with pid 3060766
11:44:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3060766
11:44:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3060766
00:05:07.834 11:44:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3060909 ]]
00:05:07.834 11:44:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3060909
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3060909 ']'
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3060909
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060909
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:07.834 11:44:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060909'
killing process with pid 3060909
11:44:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3060909
11:44:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3060909
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3060766 ]]
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3060766
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3060766 ']'
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3060766
00:05:08.093 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3060766) - No such process
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3060766 is not found'
Process with pid 3060766 is not found
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3060909 ]]
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3060909
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3060909 ']'
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3060909
00:05:08.093 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3060909) - No such process
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3060909 is not found'
Process with pid 3060909 is not found
00:05:08.093 11:44:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:08.093
00:05:08.093 real 0m13.998s
00:05:08.093 user 0m24.320s
00:05:08.093 sys 0m4.941s
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.093 11:44:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.093 ************************************
00:05:08.093 END TEST cpu_locks
00:05:08.093 ************************************
00:05:08.093
00:05:08.093 real 0m39.047s
00:05:08.093 user 1m14.963s
00:05:08.093 sys 0m8.460s
00:05:08.093 11:44:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.093 11:44:16 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.093 ************************************
00:05:08.093 END TEST event
00:05:08.093 ************************************
00:05:08.093 11:44:16 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:05:08.093 11:44:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.093 11:44:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.093 11:44:16 -- common/autotest_common.sh@10 -- # set +x
00:05:08.093 ************************************
00:05:08.093 START TEST thread
00:05:08.093 ************************************
00:05:08.093 11:44:16 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:05:08.352 * Looking for test storage...
00:05:08.353 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:08.353 11:44:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:08.353 11:44:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:08.353 11:44:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:08.353 11:44:16 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:08.353 11:44:16 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:08.353 11:44:16 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:08.353 11:44:16 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:08.353 11:44:16 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:08.353 11:44:16 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:08.353 11:44:16 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:08.353 11:44:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:08.353 11:44:16 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:08.353 11:44:16 thread -- scripts/common.sh@345 -- # : 1
00:05:08.353 11:44:16 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:08.353 11:44:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:08.353 11:44:16 thread -- scripts/common.sh@365 -- # decimal 1
00:05:08.353 11:44:16 thread -- scripts/common.sh@353 -- # local d=1
00:05:08.353 11:44:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:08.353 11:44:16 thread -- scripts/common.sh@355 -- # echo 1
00:05:08.353 11:44:16 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:08.353 11:44:16 thread -- scripts/common.sh@366 -- # decimal 2
00:05:08.353 11:44:16 thread -- scripts/common.sh@353 -- # local d=2
00:05:08.353 11:44:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:08.353 11:44:16 thread -- scripts/common.sh@355 -- # echo 2
00:05:08.353 11:44:16 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:08.353 11:44:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:08.353 11:44:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:08.353 11:44:16 thread -- scripts/common.sh@368 -- # return 0
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.353 --rc genhtml_branch_coverage=1
00:05:08.353 --rc genhtml_function_coverage=1
00:05:08.353 --rc genhtml_legend=1
00:05:08.353 --rc geninfo_all_blocks=1
00:05:08.353 --rc geninfo_unexecuted_blocks=1
00:05:08.353
00:05:08.353 '
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.353 --rc genhtml_branch_coverage=1
00:05:08.353 --rc genhtml_function_coverage=1
00:05:08.353 --rc genhtml_legend=1
00:05:08.353 --rc geninfo_all_blocks=1
00:05:08.353 --rc geninfo_unexecuted_blocks=1
00:05:08.353
00:05:08.353 '
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.353 --rc genhtml_branch_coverage=1
00:05:08.353 --rc genhtml_function_coverage=1
00:05:08.353 --rc genhtml_legend=1
00:05:08.353 --rc geninfo_all_blocks=1
00:05:08.353 --rc geninfo_unexecuted_blocks=1
00:05:08.353
00:05:08.353 '
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.353 --rc genhtml_branch_coverage=1
00:05:08.353 --rc genhtml_function_coverage=1
00:05:08.353 --rc genhtml_legend=1
00:05:08.353 --rc geninfo_all_blocks=1
00:05:08.353 --rc geninfo_unexecuted_blocks=1
00:05:08.353
00:05:08.353 '
00:05:08.353 11:44:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.353 11:44:16 thread -- common/autotest_common.sh@10 -- # set +x
00:05:08.353 ************************************
00:05:08.353 START TEST thread_poller_perf
00:05:08.353 ************************************
00:05:08.353 11:44:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:08.353 [2024-12-09 11:44:16.314096] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:08.353 [2024-12-09 11:44:16.314167] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061353 ]
00:05:08.353 [2024-12-09 11:44:16.397664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.612 [2024-12-09 11:44:16.438746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.612 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:09.548 [2024-12-09T10:44:17.601Z] ======================================
00:05:09.548 [2024-12-09T10:44:17.601Z] busy:2105227392 (cyc)
00:05:09.548 [2024-12-09T10:44:17.601Z] total_run_count: 424000
00:05:09.548 [2024-12-09T10:44:17.601Z] tsc_hz: 2100000000 (cyc)
00:05:09.548 [2024-12-09T10:44:17.601Z] ======================================
00:05:09.548 [2024-12-09T10:44:17.601Z] poller_cost: 4965 (cyc), 2364 (nsec)
00:05:09.548
00:05:09.548 real 0m1.188s
00:05:09.548 user 0m1.102s
00:05:09.548 sys 0m0.081s
00:05:09.548 11:44:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.548 11:44:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:09.548 ************************************
00:05:09.548 END TEST thread_poller_perf
00:05:09.548 ************************************
00:05:09.548 11:44:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:09.548 11:44:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:09.548 11:44:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.548 11:44:17 thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.548 ************************************
00:05:09.548 START TEST thread_poller_perf
00:05:09.548 ************************************
00:05:09.548 11:44:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:09.808 [2024-12-09 11:44:17.575879] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:09.808 [2024-12-09 11:44:17.575951] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061530 ]
00:05:09.808 [2024-12-09 11:44:17.660276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.808 [2024-12-09 11:44:17.704935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.808 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:10.745 [2024-12-09T10:44:18.798Z] ======================================
00:05:10.745 [2024-12-09T10:44:18.798Z] busy:2101490274 (cyc)
00:05:10.745 [2024-12-09T10:44:18.798Z] total_run_count: 5223000
00:05:10.745 [2024-12-09T10:44:18.798Z] tsc_hz: 2100000000 (cyc)
00:05:10.745 [2024-12-09T10:44:18.798Z] ======================================
00:05:10.745 [2024-12-09T10:44:18.798Z] poller_cost: 402 (cyc), 191 (nsec)
00:05:10.745
00:05:10.745 real 0m1.189s
00:05:10.745 user 0m1.104s
00:05:10.745 sys 0m0.081s
00:05:10.745 11:44:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.745 11:44:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:10.745 ************************************
00:05:10.745 END TEST thread_poller_perf
00:05:10.745 ************************************
00:05:10.745 11:44:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:10.745
00:05:10.745 real 0m2.697s
00:05:10.745 user 0m2.375s
00:05:10.745 sys 0m0.337s
00:05:10.745 11:44:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.745 11:44:18 thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.745 ************************************
00:05:10.745 END TEST thread
00:05:10.745 ************************************
00:05:11.005 11:44:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:11.005 11:44:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:05:11.005 11:44:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.005 11:44:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.005 11:44:18 -- common/autotest_common.sh@10 -- # set +x
00:05:11.005 ************************************
00:05:11.005 START TEST app_cmdline
00:05:11.005 ************************************
00:05:11.005 11:44:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:05:11.005 * Looking for test storage...
00:05:11.005 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:05:11.005 11:44:18 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:11.005 11:44:18 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:11.005 11:44:18 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:11.005 11:44:19 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:11.005 11:44:19 app_cmdline -- scripts/common.sh@368 -- # return 0
00:05:11.005 11:44:19 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:11.005 11:44:19 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.005 --rc genhtml_branch_coverage=1
00:05:11.005 --rc genhtml_function_coverage=1
00:05:11.005 --rc genhtml_legend=1
00:05:11.005 --rc geninfo_all_blocks=1
00:05:11.005 --rc geninfo_unexecuted_blocks=1
00:05:11.005
00:05:11.005 '
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.006 --rc genhtml_branch_coverage=1
00:05:11.006 --rc genhtml_function_coverage=1
00:05:11.006 --rc genhtml_legend=1
00:05:11.006 --rc geninfo_all_blocks=1
00:05:11.006 --rc geninfo_unexecuted_blocks=1
00:05:11.006
00:05:11.006 '
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.006 --rc genhtml_branch_coverage=1
00:05:11.006 --rc genhtml_function_coverage=1
00:05:11.006 --rc genhtml_legend=1
00:05:11.006 --rc geninfo_all_blocks=1
00:05:11.006 --rc geninfo_unexecuted_blocks=1
00:05:11.006
00:05:11.006 '
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.006 --rc genhtml_branch_coverage=1
00:05:11.006 --rc genhtml_function_coverage=1
00:05:11.006 --rc genhtml_legend=1
00:05:11.006 --rc geninfo_all_blocks=1
00:05:11.006 --rc geninfo_unexecuted_blocks=1
00:05:11.006
00:05:11.006 '
00:05:11.006 11:44:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:11.006 11:44:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3061868
00:05:11.006 11:44:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3061868
00:05:11.006 11:44:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3061868 ']'
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.006 11:44:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:11.265 [2024-12-09 11:44:19.076697] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:05:11.265 [2024-12-09 11:44:19.076746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061868 ]
00:05:11.265 [2024-12-09 11:44:19.155379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.265 [2024-12-09 11:44:19.197336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.525 11:44:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:11.525 11:44:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:11.525 {
00:05:11.525 "version": "SPDK v25.01-pre git sha1 3fe025922",
00:05:11.525 "fields": {
00:05:11.525 "major": 25,
00:05:11.525 "minor": 1,
00:05:11.525 "patch": 0,
00:05:11.525 "suffix": "-pre",
00:05:11.525 "commit": "3fe025922"
00:05:11.525 }
00:05:11.525 }
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:11.525 11:44:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.525 11:44:19 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:11.525 11:44:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.784 11:44:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:11.784 11:44:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:11.784 11:44:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:11.784 request:
00:05:11.784 {
00:05:11.784 "method": "env_dpdk_get_mem_stats",
00:05:11.784 "req_id": 1
00:05:11.784 }
00:05:11.784 Got JSON-RPC error response
00:05:11.784 response:
00:05:11.784 {
00:05:11.784 "code": -32601,
00:05:11.784 "message": "Method not found"
00:05:11.784 }
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:11.784 11:44:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3061868
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3061868 ']'
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3061868
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:11.784 11:44:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061868
00:05:12.042 11:44:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:12.042 11:44:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:12.042 11:44:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061868'
killing process with pid 3061868
11:44:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 3061868
11:44:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 3061868
00:05:12.301
00:05:12.301 real 0m1.314s
00:05:12.301 user 0m1.514s
00:05:12.301 sys 0m0.440s
00:05:12.301 11:44:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.301 11:44:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:12.301 ************************************
00:05:12.301 END TEST app_cmdline
00:05:12.301 ************************************
00:05:12.301 11:44:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:05:12.301 11:44:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.301 11:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.301 11:44:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.301 ************************************
00:05:12.301 START TEST version
00:05:12.301 ************************************
00:05:12.301 11:44:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:05:12.301 * Looking for test storage...
00:05:12.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:05:12.301 11:44:20 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:12.301 11:44:20 version -- common/autotest_common.sh@1711 -- # lcov --version
00:05:12.301 11:44:20 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:12.560 11:44:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.560 11:44:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.560 11:44:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.560 11:44:20 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.560 11:44:20 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.560 11:44:20 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.560 11:44:20 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.560 11:44:20 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.560 11:44:20 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.560 11:44:20 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.560 11:44:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.560 11:44:20 version -- scripts/common.sh@344 -- # case "$op" in
00:05:12.560 11:44:20 version -- scripts/common.sh@345 -- # : 1
00:05:12.560 11:44:20 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.560 11:44:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.560 11:44:20 version -- scripts/common.sh@365 -- # decimal 1
00:05:12.560 11:44:20 version -- scripts/common.sh@353 -- # local d=1
00:05:12.560 11:44:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.560 11:44:20 version -- scripts/common.sh@355 -- # echo 1
00:05:12.560 11:44:20 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.560 11:44:20 version -- scripts/common.sh@366 -- # decimal 2
00:05:12.560 11:44:20 version -- scripts/common.sh@353 -- # local d=2
00:05:12.560 11:44:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.560 11:44:20 version -- scripts/common.sh@355 -- # echo 2
00:05:12.560 11:44:20 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.560 11:44:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.560 11:44:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.560 11:44:20 version -- scripts/common.sh@368 -- # return 0
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:12.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.560 --rc genhtml_branch_coverage=1
00:05:12.560 --rc genhtml_function_coverage=1
00:05:12.560 --rc genhtml_legend=1
00:05:12.560 --rc geninfo_all_blocks=1
00:05:12.560 --rc geninfo_unexecuted_blocks=1
00:05:12.560
00:05:12.560 '
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:12.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.560 --rc genhtml_branch_coverage=1
00:05:12.560 --rc genhtml_function_coverage=1
00:05:12.560 --rc genhtml_legend=1
00:05:12.560 --rc geninfo_all_blocks=1
00:05:12.560 --rc geninfo_unexecuted_blocks=1
00:05:12.560
00:05:12.560 '
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:12.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.560 --rc genhtml_branch_coverage=1
00:05:12.560 --rc genhtml_function_coverage=1
00:05:12.560 --rc genhtml_legend=1
00:05:12.560 --rc geninfo_all_blocks=1
00:05:12.560 --rc geninfo_unexecuted_blocks=1
00:05:12.560
00:05:12.560 '
00:05:12.560 11:44:20 version -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:12.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.560 --rc genhtml_branch_coverage=1
00:05:12.560 --rc genhtml_function_coverage=1
00:05:12.560 --rc genhtml_legend=1
00:05:12.560 --rc geninfo_all_blocks=1
00:05:12.560 --rc geninfo_unexecuted_blocks=1
00:05:12.560
00:05:12.560 '
00:05:12.560 11:44:20 version -- app/version.sh@17 -- # get_header_version major
00:05:12.560 11:44:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # tr -d '"'
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # cut -f2
00:05:12.560 11:44:20 version -- app/version.sh@17 -- # major=25
00:05:12.560 11:44:20 version -- app/version.sh@18 -- # get_header_version minor
00:05:12.560 11:44:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # cut -f2
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # tr -d '"'
00:05:12.560 11:44:20 version -- app/version.sh@18 -- # minor=1
00:05:12.560 11:44:20 version -- app/version.sh@19 -- # get_header_version patch
00:05:12.560 11:44:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # cut -f2
00:05:12.560 11:44:20 version -- app/version.sh@14 -- # tr -d '"'
00:05:12.561 11:44:20 version -- app/version.sh@19 -- # patch=0
00:05:12.561 11:44:20 version -- app/version.sh@20 -- # get_header_version suffix
00:05:12.561 11:44:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:12.561 11:44:20 version -- app/version.sh@14 -- # cut -f2
00:05:12.561 11:44:20 version -- app/version.sh@14 -- # tr -d '"'
00:05:12.561 11:44:20 version -- app/version.sh@20 -- # suffix=-pre
00:05:12.561 11:44:20 version -- app/version.sh@22 -- # version=25.1
00:05:12.561 11:44:20 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:12.561 11:44:20 version -- app/version.sh@28 -- # version=25.1rc0
00:05:12.561 11:44:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:05:12.561 11:44:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:12.561 11:44:20 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:12.561 11:44:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:05:12.561
00:05:12.561 real 0m0.238s
00:05:12.561 user 0m0.144s
00:05:12.561 sys 0m0.136s
00:05:12.561 11:44:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.561 11:44:20 version -- common/autotest_common.sh@10 -- # set +x
00:05:12.561 ************************************
00:05:12.561 END TEST version
00:05:12.561 ************************************
00:05:12.561 11:44:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:12.561 11:44:20 -- spdk/autotest.sh@194 -- # uname -s
00:05:12.561 11:44:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:12.561 11:44:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:12.561 11:44:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:12.561 11:44:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@260 -- # timing_exit lib
00:05:12.561 11:44:20 -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:12.561 11:44:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.561 11:44:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@277 -- # export NET_TYPE
00:05:12.561 11:44:20 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']'
00:05:12.561 11:44:20 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:05:12.561 11:44:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:12.561 11:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.561 11:44:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.561 ************************************
00:05:12.561 START TEST nvmf_rdma
00:05:12.561 ************************************
00:05:12.561 11:44:20 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:05:12.820 * Looking for test storage...
00:05:12.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:05:12.820 11:44:20 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:12.820 11:44:20 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version
00:05:12.820 11:44:20 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:12.820 11:44:20 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.820 11:44:20 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.821 11:44:20 nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.821 --rc genhtml_branch_coverage=1
00:05:12.821 --rc genhtml_function_coverage=1
00:05:12.821 --rc genhtml_legend=1
00:05:12.821 --rc geninfo_all_blocks=1
00:05:12.821 --rc geninfo_unexecuted_blocks=1
00:05:12.821
00:05:12.821 '
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.821 --rc genhtml_branch_coverage=1
00:05:12.821 --rc genhtml_function_coverage=1
00:05:12.821 --rc genhtml_legend=1
00:05:12.821 --rc geninfo_all_blocks=1
00:05:12.821 --rc geninfo_unexecuted_blocks=1
00:05:12.821
00:05:12.821 '
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.821 --rc genhtml_branch_coverage=1
00:05:12.821 --rc genhtml_function_coverage=1
00:05:12.821 --rc genhtml_legend=1
00:05:12.821 --rc geninfo_all_blocks=1
00:05:12.821 --rc geninfo_unexecuted_blocks=1
00:05:12.821
00:05:12.821 '
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.821 --rc genhtml_branch_coverage=1
00:05:12.821 --rc genhtml_function_coverage=1
00:05:12.821 --rc genhtml_legend=1
00:05:12.821 --rc geninfo_all_blocks=1
00:05:12.821 --rc geninfo_unexecuted_blocks=1
00:05:12.821
00:05:12.821 '
00:05:12.821 11:44:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s
00:05:12.821 11:44:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:12.821 11:44:20 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.821 11:44:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:05:12.821 ************************************
00:05:12.821 START TEST nvmf_target_core
00:05:12.821 ************************************
00:05:12.821 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:05:13.081 * Looking for test storage...
00:05:13.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.081 --rc genhtml_branch_coverage=1 00:05:13.081 --rc genhtml_function_coverage=1 00:05:13.081 --rc genhtml_legend=1 00:05:13.081 --rc geninfo_all_blocks=1 00:05:13.081 --rc geninfo_unexecuted_blocks=1 00:05:13.081 00:05:13.081 ' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.081 --rc genhtml_branch_coverage=1 00:05:13.081 --rc genhtml_function_coverage=1 00:05:13.081 --rc genhtml_legend=1 00:05:13.081 --rc geninfo_all_blocks=1 00:05:13.081 --rc geninfo_unexecuted_blocks=1 00:05:13.081 00:05:13.081 ' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.081 --rc genhtml_branch_coverage=1 00:05:13.081 --rc genhtml_function_coverage=1 00:05:13.081 --rc genhtml_legend=1 00:05:13.081 --rc geninfo_all_blocks=1 00:05:13.081 --rc geninfo_unexecuted_blocks=1 00:05:13.081 00:05:13.081 ' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.081 --rc genhtml_branch_coverage=1 00:05:13.081 --rc genhtml_function_coverage=1 00:05:13.081 --rc genhtml_legend=1 00:05:13.081 --rc geninfo_all_blocks=1 00:05:13.081 --rc geninfo_unexecuted_blocks=1 00:05:13.081 00:05:13.081 ' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.081 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.082 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.082 11:44:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:13.082 
************************************ 00:05:13.082 START TEST nvmf_abort 00:05:13.082 ************************************ 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:13.082 * Looking for test storage... 00:05:13.082 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.082 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.342 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.343 --rc genhtml_branch_coverage=1 00:05:13.343 --rc genhtml_function_coverage=1 00:05:13.343 --rc genhtml_legend=1 00:05:13.343 --rc geninfo_all_blocks=1 00:05:13.343 --rc geninfo_unexecuted_blocks=1 00:05:13.343 00:05:13.343 ' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.343 --rc genhtml_branch_coverage=1 00:05:13.343 --rc genhtml_function_coverage=1 00:05:13.343 --rc genhtml_legend=1 00:05:13.343 --rc geninfo_all_blocks=1 00:05:13.343 --rc geninfo_unexecuted_blocks=1 00:05:13.343 00:05:13.343 ' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.343 --rc genhtml_branch_coverage=1 00:05:13.343 --rc genhtml_function_coverage=1 00:05:13.343 --rc genhtml_legend=1 00:05:13.343 --rc geninfo_all_blocks=1 00:05:13.343 --rc geninfo_unexecuted_blocks=1 00:05:13.343 00:05:13.343 ' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.343 --rc genhtml_branch_coverage=1 00:05:13.343 --rc genhtml_function_coverage=1 00:05:13.343 --rc genhtml_legend=1 00:05:13.343 --rc geninfo_all_blocks=1 00:05:13.343 --rc geninfo_unexecuted_blocks=1 00:05:13.343 00:05:13.343 ' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.343 11:44:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:19.914 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:19.915 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:19.915 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:19.915 Found net devices under 0000:da:00.0: mlx_0_0 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:19.915 Found net devices under 0000:da:00.1: mlx_0_1 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:19.915 11:44:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:19.915 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:19.915 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:05:19.915 altname enp218s0f0np0 00:05:19.915 altname ens818f0np0 00:05:19.915 inet 192.168.100.8/24 scope global mlx_0_0 00:05:19.915 valid_lft forever preferred_lft forever 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:19.915 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:19.916 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:19.916 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:05:19.916 altname enp218s0f1np1 00:05:19.916 altname ens818f1np1 00:05:19.916 inet 192.168.100.9/24 scope global mlx_0_1 00:05:19.916 valid_lft forever preferred_lft forever 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:19.916 11:44:27 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:19.916 192.168.100.9' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:19.916 192.168.100.9' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:19.916 192.168.100.9' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:19.916 11:44:27 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3065503 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3065503 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3065503 ']' 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.916 11:44:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.916 [2024-12-09 11:44:27.189912] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:05:19.916 [2024-12-09 11:44:27.189970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:19.916 [2024-12-09 11:44:27.269828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.916 [2024-12-09 11:44:27.311583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:19.916 [2024-12-09 11:44:27.311619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:19.916 [2024-12-09 11:44:27.311626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.916 [2024-12-09 11:44:27.311632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.916 [2024-12-09 11:44:27.311637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:19.916 [2024-12-09 11:44:27.313101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.916 [2024-12-09 11:44:27.313206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.916 [2024-12-09 11:44:27.313207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.175 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 [2024-12-09 11:44:28.088567] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x191d080/0x1921570) succeed. 00:05:20.175 [2024-12-09 11:44:28.108666] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x191e670/0x1962c10) succeed. 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 Malloc0 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 Delay0 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 [2024-12-09 11:44:28.288507] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.434 11:44:28 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:20.434 [2024-12-09 11:44:28.414970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:22.968 Initializing NVMe Controllers 00:05:22.968 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:22.968 controller IO queue size 128 less than required 00:05:22.968 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:22.968 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:22.968 Initialization complete. Launching workers. 
00:05:22.968 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42958 00:05:22.968 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43019, failed to submit 62 00:05:22.968 success 42959, unsuccessful 60, failed 0 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:22.968 rmmod nvme_rdma 00:05:22.968 rmmod nvme_fabrics 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3065503 ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3065503 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3065503 ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3065503 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065503 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065503' 00:05:22.968 killing process with pid 3065503 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3065503 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3065503 00:05:22.968 11:44:30 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:22.968 00:05:22.968 real 0m9.850s 00:05:22.968 user 0m14.622s 00:05:22.968 sys 0m4.909s 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.968 ************************************ 00:05:22.968 END TEST nvmf_abort 00:05:22.968 ************************************ 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:22.968 ************************************ 00:05:22.968 START TEST nvmf_ns_hotplug_stress 00:05:22.968 ************************************ 00:05:22.968 11:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:23.228 * Looking for test storage... 00:05:23.228 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
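(The dense scripts/common.sh trace that follows is just a dotted-version comparison: the harness checks whether the installed lcov, 1.15 here, is older than 2 in order to pick coverage flags. The helper below is a simplified sketch of that check using GNU sort -V, not the verbatim cmp_versions implementation being traced:)

# Return success if version $1 is strictly older than $2 (simplified sketch).
lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
lt 1.15 2 && echo 'old lcov: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'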
00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.228 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.228 --rc genhtml_branch_coverage=1 00:05:23.229 --rc genhtml_function_coverage=1 00:05:23.229 --rc genhtml_legend=1 00:05:23.229 --rc geninfo_all_blocks=1 00:05:23.229 --rc geninfo_unexecuted_blocks=1 00:05:23.229 00:05:23.229 ' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.229 --rc genhtml_branch_coverage=1 00:05:23.229 --rc genhtml_function_coverage=1 00:05:23.229 --rc genhtml_legend=1 00:05:23.229 --rc geninfo_all_blocks=1 00:05:23.229 --rc geninfo_unexecuted_blocks=1 00:05:23.229 00:05:23.229 ' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.229 --rc genhtml_branch_coverage=1 00:05:23.229 --rc genhtml_function_coverage=1 00:05:23.229 --rc genhtml_legend=1 00:05:23.229 --rc geninfo_all_blocks=1 00:05:23.229 --rc geninfo_unexecuted_blocks=1 00:05:23.229 00:05:23.229 ' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.229 --rc genhtml_branch_coverage=1 00:05:23.229 --rc genhtml_function_coverage=1 00:05:23.229 --rc genhtml_legend=1 00:05:23.229 --rc geninfo_all_blocks=1 00:05:23.229 --rc geninfo_unexecuted_blocks=1 00:05:23.229 00:05:23.229 ' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.229 11:44:31 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.229 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:23.229 11:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:29.802 11:44:36 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:29.802 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:29.802 11:44:36 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:29.802 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:29.802 Found net devices under 0000:da:00.0: mlx_0_0 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
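(The device-discovery loop above reduces to: match each PCI function against the known Intel/Mellanox NIC ID tables, then resolve each matched function to its netdev through sysfs, exactly as the pci_net_devs assignment in the trace shows. A minimal sketch of that resolution step, using the two 0x15b3:0x1015 ConnectX functions found in this run:)

for pci in 0000:da:00.0 0000:da:00.1; do   # the two Mellanox functions matched above
    ls "/sys/bus/pci/devices/$pci/net/"    # yields mlx_0_0 and mlx_0_1 on this host
done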
00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:29.802 Found net devices under 0000:da:00.1: mlx_0_1 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:29.802 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:29.803 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.803 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:05:29.803 altname enp218s0f0np0 00:05:29.803 altname ens818f0np0 00:05:29.803 inet 192.168.100.8/24 scope global mlx_0_0 00:05:29.803 valid_lft forever preferred_lft forever 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:29.803 11:44:36 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:29.803 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.803 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:05:29.803 altname enp218s0f1np1 00:05:29.803 altname ens818f1np1 00:05:29.803 inet 192.168.100.9/24 scope global mlx_0_1 00:05:29.803 valid_lft forever preferred_lft forever 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:29.803 11:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:29.803 192.168.100.9' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:29.803 192.168.100.9' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:29.803 192.168.100.9' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3069300 00:05:29.803 11:44:37 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3069300 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3069300 ']' 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.803 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 [2024-12-09 11:44:37.144803] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:05:29.803 [2024-12-09 11:44:37.144855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:29.803 [2024-12-09 11:44:37.219929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.803 [2024-12-09 11:44:37.261218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:29.803 [2024-12-09 11:44:37.261254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:29.803 [2024-12-09 11:44:37.261261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.804 [2024-12-09 11:44:37.261268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.804 [2024-12-09 11:44:37.261273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
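(With the target app up, the hotplug stress below pairs a 30-second initiator-side random-read workload against continuous target-side namespace churn. Stripped to its essentials it looks like the sketch here, assembled from the rpc.py and spdk_nvme_perf invocations traced in the rest of this test; the loop-body ordering is approximate:)

rpc.py bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                   # reader keeps 128 I/Os in flight for 30 s
PERF_PID=$!; size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                       # churn namespaces while the reader lives
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    size=$((size + 1))
    rpc.py bdev_null_resize NULL1 "$size"                       # grow NULL1 by 1 MiB per pass
done

The bursts of suppressed "Read completed with error (sct=0, sc=11)" messages in the iterations below are the perf reader hitting the window in each pass where namespace 1 has been detached, which is precisely the condition this stress test is probing.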
00:05:29.804 [2024-12-09 11:44:37.262654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.804 [2024-12-09 11:44:37.262683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.804 [2024-12-09 11:44:37.262684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:29.804 [2024-12-09 11:44:37.591148] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1625080/0x1629570) succeed. 00:05:29.804 [2024-12-09 11:44:37.602114] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1626670/0x166ac10) succeed. 00:05:29.804 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:30.063 11:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.063 [2024-12-09 11:44:38.072471] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:30.063 11:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:30.321 11:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:30.580 Malloc0 00:05:30.580 11:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:30.838 Delay0 00:05:30.838 11:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.097 11:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:05:31.097 NULL1 00:05:31.097 11:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:31.356 11:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:31.356 11:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3069773 00:05:31.356 11:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3069773 00:05:31.356 11:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.733 Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 11:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.733 11:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:32.733 11:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:32.992 true 00:05:32.992 11:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3069773 00:05:32.992 11:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 11:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.926 11:44:41 
00:05:33.926 11:44:41 -> 00:06:03.801 11:45:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress: resize/hotplug loop, condensed. null_size stepped from 1002 to 1028 at roughly one cycle per second; every cycle traced the same five script lines (rpc.py = /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py throughout):
    target/ns_hotplug_stress.sh@44 -- # kill -0 3069773
    target/ns_hotplug_stress.sh@45 -- # rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    target/ns_hotplug_stress.sh@46 -- # rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    target/ns_hotplug_stress.sh@49 -- # null_size=<1002..1028>
    target/ns_hotplug_stress.sh@50 -- # rpc.py bdev_null_resize NULL1 <1002..1028>   (each call returned: true)
Interleaved with almost every cycle, the initiator printed rate-limited read errors for the detached namespace, three to eight lines per cycle (duplicates collapsed here):
    Message suppressed 999 times: Read completed with error (sct=0, sc=11)
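For readability, the xtrace above corresponds to the following bash sketch. It is reconstructed only from the traced script lines (@44-@53); the variable names rpc_py and perf_pid and the starting value of null_size are assumptions, not quoted from ns_hotplug_stress.sh:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py  # assumed alias for the traced rpc.py path
    perf_pid=3069773      # background NVMe-oF initiator started earlier in the test (assumed variable name)
    null_size=1001        # assumed starting value; the trace first shows 1002

    while kill -0 "$perf_pid"; do                                         # line 44: loop while the initiator is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: detach namespace 1 under I/O
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach it, backed by bdev Delay0
        null_size=$((null_size + 1))                                      # line 49
        $rpc_py bdev_null_resize NULL1 "$null_size"                       # line 50: grow the NULL1 bdev each pass
    done
    wait "$perf_pid"                                                      # line 53: reap the initiator once it exits

The suppressed messages are the expected side effect: while namespace 1 is detached, reads against it complete with sct=0, sc=11 (generic command status; 11 decimal = 0x0b, Invalid Namespace or Format), and the initiator rate-limits the log line, printing once per 999 suppressed occurrences.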
00:06:03.801 Initializing NVMe Controllers
00:06:03.801 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:03.801 Controller IO queue size 128, less than required.
00:06:03.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:03.801 Controller IO queue size 128, less than required.
00:06:03.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:03.801 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:03.801 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:03.801 Initialization complete. Launching workers.
00:06:03.801 ========================================================
00:06:03.801 Latency(us)
00:06:03.801 Device Information                                                             :     IOPS  MiB/s   Average       min        max
00:06:03.801 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  6061.77   2.96  18575.58    940.97 1138037.19
00:06:03.801 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 32244.63  15.74   3969.44   1999.64  291087.56
00:06:03.801 ========================================================
00:06:03.801 Total                                                                          : 38306.40  18.70   6280.78    940.97 1138037.19
00:06:04.061 true
With the initiator (pid 3069773) gone, the loop's liveness check failed and the script moved on to teardown:
    target/ns_hotplug_stress.sh@44 -- # kill -0 3069773
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3069773) - No such process
    target/ns_hotplug_stress.sh@53 -- # wait 3069773
    target/ns_hotplug_stress.sh@54 -- # rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    target/ns_hotplug_stress.sh@55 -- # rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
Parallel add/remove phase (11:45:12-11:45:13): line 58 set nthreads=8 and pids=(); the @59/@60 loop created eight null bdevs, each 100 MB with a 4096-byte block size (rpc.py bdev_null_create null0 100 4096 through null7 100 4096, each call echoing its bdev name back). The @62-@64 loop then began launching one background add_remove worker per namespace/bdev pair, starting with add_remove 1 null0, recording each worker's pid via pids+=($!).
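Collapsed, the bdev-creation and worker-spawn trace (@58-@66) corresponds to this bash sketch. It is inferred from the xtrace's expanded values, so the exact loop shape and quoting are assumptions:

    nthreads=8                                        # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # line 59
        $rpc_py bdev_null_create "null$i" 100 4096    # line 60: 100 MB null bdev, 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do              # line 62
        add_remove $((i + 1)) "null$i" &              # line 63: one worker per namespace/bdev pair
        pids+=($!)                                    # line 64
    done
    wait "${pids[@]}"                                 # line 66: block until all eight workers finish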
00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:05.873 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:05.874 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3076125 3076126 3076128 3076130 3076132 3076134 3076136 3076137 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.133 11:45:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.133 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.133 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.134 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.394 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.394 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.394 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.394 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.394 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.395 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.655 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.914 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.915 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.177 11:45:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
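The @16/@17/@18 markers in the trace all point into target/ns_hotplug_stress.sh: line 16 is the loop control that prints the (( ++i )) / (( i < 10 )) checks, line 17 attaches one of the null bdevs as a namespace of nqn.2016-06.io.spdk:cnode1, and line 18 detaches one. The sketch below is one plausible shape for that loop, not a copy of the script: the add_remove helper, the per-namespace background workers, and the final wait are assumptions introduced to explain the interleaved counter lines; only the two rpc.py invocations and the bound of 10 are taken from the log itself.

# Hedged reconstruction of the hotplug stress pattern traced above; the
# add_remove helper and the backgrounding are assumptions, while the rpc.py
# calls mirror the @17/@18 xtrace lines.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 i
    for ((i = 0; i < 10; ++i)); do    # emits the @16 (( ++i )) / (( i < 10 )) lines
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"    # @17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"                        # @18
    done
}

for nsid in {1..8}; do     # null0..null7 surface as namespace IDs 1..8 in the trace
    add_remove "$nsid" &   # concurrent workers would explain the interleaving
done
wait

Because several of these loops race one another, a reference-counting or teardown bug in namespace hotplug tends to surface here as an RPC failure or a target crash rather than a silent leak.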
00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.177 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.435 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.435 11:45:15 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.695 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.954 11:45:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.954 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.954 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.954 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.222 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.223 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.482 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.741 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.000 11:45:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.000 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.000 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.000 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.259 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.260 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.260 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.260 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.260 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.519 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.779 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.779 11:45:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.038 11:45:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.038 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.038 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.038 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.038 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.038 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.039 11:45:18 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.039 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:10.298 rmmod nvme_rdma 00:06:10.298 rmmod nvme_fabrics 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3069300 ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3069300 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3069300 ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3069300 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069300 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069300' 00:06:10.298 killing process with pid 3069300 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3069300 00:06:10.298 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3069300
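That block is the test's teardown: ns_hotplug_stress.sh clears its signal trap and calls nvmftestfini; nvmfcleanup syncs and unloads nvme-rdma and nvme-fabrics with modprobe -r (retried under set +e, hence the rmmod lines); killprocess then stops the nvmf target, pid 3069300. The trace shows killprocess validating the pid before signalling it; the condensed sketch below mirrors those checks, with the early returns and the sudo branch filled in as assumptions since this run never exercises them.

# Condensed sketch of the killprocess checks traced at autotest_common.sh
# @954-@978 above; the probes match the trace, the failure handling is assumed.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # @954: reject an empty pid
    kill -0 "$pid" || return 1                 # @958: signal 0 probes existence only
    if [ "$(uname)" = Linux ]; then            # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")    # @960: reactor_1 here
        # @964: a pid that now belongs to a sudo wrapper must not be signalled
        # directly (assumed branch; the trace only shows the comparison)
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"       # @972
    kill "$pid"                                # @973
    wait "$pid"                                # @978: reap the child before returning
}

The kill -0 and comm= probes guard against pid reuse: by teardown time the recorded pid could in principle belong to an unrelated process, and signalling it blindly could take down the wrong thing.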
00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:10.558 00:06:10.558 real 0m47.478s 00:06:10.558 user 3m22.169s 00:06:10.558 sys 0m11.661s 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.558 ************************************ 00:06:10.558 END TEST nvmf_ns_hotplug_stress 00:06:10.558 ************************************ 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.558 ************************************ 00:06:10.558 START TEST nvmf_delete_subsystem 00:06:10.558 ************************************ 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:10.558 * Looking for test storage... 00:06:10.558 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.558 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem --
scripts/common.sh@344 -- # case "$op" in 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.818 --rc genhtml_branch_coverage=1 00:06:10.818 --rc genhtml_function_coverage=1 00:06:10.818 --rc genhtml_legend=1 00:06:10.818 --rc geninfo_all_blocks=1 00:06:10.818 --rc geninfo_unexecuted_blocks=1 00:06:10.818 00:06:10.818 ' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.818 --rc genhtml_branch_coverage=1 00:06:10.818 --rc genhtml_function_coverage=1 00:06:10.818 --rc genhtml_legend=1 00:06:10.818 --rc geninfo_all_blocks=1 00:06:10.818 --rc geninfo_unexecuted_blocks=1 00:06:10.818 00:06:10.818 ' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.818 --rc genhtml_branch_coverage=1 00:06:10.818 --rc genhtml_function_coverage=1 00:06:10.818 --rc genhtml_legend=1 00:06:10.818 --rc geninfo_all_blocks=1 00:06:10.818 --rc geninfo_unexecuted_blocks=1 00:06:10.818 00:06:10.818 ' 00:06:10.818 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.818 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.818 --rc genhtml_branch_coverage=1 00:06:10.818 --rc genhtml_function_coverage=1 00:06:10.818 --rc genhtml_legend=1 00:06:10.819 --rc geninfo_all_blocks=1 00:06:10.819 --rc geninfo_unexecuted_blocks=1 00:06:10.819 00:06:10.819 ' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.819 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.819 11:45:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:17.390 11:45:24 
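The "[: : integer expression expected" message above is a genuine quoting bug captured by the log: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq operator requires an integer on both sides, so an empty string makes the comparison itself error out (the script carries on because the result is simply nonzero). A short reproduction and one defensive rewrite, with the variable name invented for illustration:

    flag=""
    [ "$flag" -eq 1 ]         # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]    # default the empty value to 0 before the numeric test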
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:17.390 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.390 
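The e810/x722/mlx arrays above are filled by looking up vendor:device ID pairs in pci_bus_cache, an associative array mapping IDs to PCI addresses. A condensed sketch of the lookup; the cache is populated here by hand with the two ConnectX functions this run actually found (common.sh builds it from a bus scan):

    declare -A pci_bus_cache
    pci_bus_cache["0x15b3:0x1015"]="0000:da:00.0 0000:da:00.1"   # values taken from this log
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # unquoted on purpose: word-split into array elements
    echo "matched ${#mlx[@]} mlx5 functions"      # -> matched 2 mlx5 functions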
11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:17.390 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:17.390 Found net devices under 0000:da:00.0: mlx_0_0 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:17.390 Found net devices under 0000:da:00.1: mlx_0_1 00:06:17.390 11:45:24 
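The "Found net devices under ..." records come from a sysfs glob: every PCI network function exposes its interfaces under /sys/bus/pci/devices/<BDF>/net/. The pattern, as the nvmf/common.sh@411 and @427 records above use it:

    pci=0000:da:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob match per attached netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0 on this node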
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:17.390 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:17.391 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.391 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:06:17.391 altname enp218s0f0np0 00:06:17.391 altname ens818f0np0 00:06:17.391 inet 192.168.100.8/24 scope global mlx_0_0 00:06:17.391 valid_lft forever preferred_lft forever 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:17.391 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.391 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:06:17.391 altname enp218s0f1np1 00:06:17.391 
altname ens818f1np1 00:06:17.391 inet 192.168.100.9/24 scope global mlx_0_1 00:06:17.391 valid_lft forever preferred_lft forever 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:17.391 11:45:24 
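get_ip_address, seen in the nvmf/common.sh@116-117 records above and below, is an ip/awk/cut pipeline. A self-contained version matching those records:

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is e.g. 192.168.100.8/24; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node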
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:17.391 192.168.100.9' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:17.391 192.168.100.9' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:17.391 192.168.100.9' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3080273 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3080273 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:17.391 11:45:24 
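RDMA_IP_LIST holds one address per line, and the nvmf/common.sh@485-486 records above peel off the first and second entries with head and tail. Reconstructed with the values from this run:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # one address per line, gathered above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9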
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3080273 ']' 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.391 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 [2024-12-09 11:45:24.699499] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:06:17.392 [2024-12-09 11:45:24.699545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.392 [2024-12-09 11:45:24.775308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.392 [2024-12-09 11:45:24.816045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.392 [2024-12-09 11:45:24.816085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.392 [2024-12-09 11:45:24.816092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.392 [2024-12-09 11:45:24.816098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.392 [2024-12-09 11:45:24.816103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
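nvmfappstart, whose effects appear above (nvmfpid=3080273, waitforlisten on /var/tmp/spdk.sock), launches the target and blocks until its RPC socket answers. A simplified stand-in; the real waitforlisten in autotest_common.sh is more careful (retry cap, process-liveness checks), and rpc.py being on the path is an assumption here:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is up
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done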
00:06:17.392 [2024-12-09 11:45:24.817294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.392 [2024-12-09 11:45:24.817296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 [2024-12-09 11:45:24.975234] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23621f0/0x23666e0) succeed. 00:06:17.392 [2024-12-09 11:45:24.984853] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2363740/0x23a7d80) succeed. 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 [2024-12-09 11:45:25.067398] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 NULL1 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
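The rpc_cmd calls above build the target side of the test: an RDMA transport, a subsystem capped at 10 namespaces, a listener on the first RDMA IP, and a 1000 MiB null bdev with 512-byte blocks. The same sequence as standalone scripts/rpc.py invocations (rpc_cmd is the harness's wrapper around these same RPCs):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512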
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 Delay0 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3080316 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:17.392 11:45:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.392 [2024-12-09 11:45:25.201531] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
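Here the test arms itself: Delay0 wraps NULL1 with a 1,000,000 us (one second) average and p99 latency on every read and write, so with spdk_nvme_perf holding queue depth 128 there are always queued I/Os in flight when the subsystem is deleted. That is what forces the "completed with error (sct=0, sc=8)" storm below. The same steps as plain commands, mirroring the records above:

    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2    # delete_subsystem.sh@30: give perf time to connect and queue I/O
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1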
00:06:19.290 11:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:19.290 11:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.290 11:45:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:20.223 NVMe io qpair process completion error
00:06:20.223 NVMe io qpair process completion error
00:06:20.481 NVMe io qpair process completion error
00:06:20.481 NVMe io qpair process completion error
00:06:20.481 NVMe io qpair process completion error
00:06:20.481 NVMe io qpair process completion error
00:06:20.481 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:20.481 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:20.481 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3080316
00:06:20.481 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:21.046 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:21.046 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3080316
00:06:21.046 11:45:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:21.304 [... several hundred repeated spdk_nvme_perf completion records omitted: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6" ...]
00:06:21.306 Initializing NVMe Controllers
00:06:21.306 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:21.306 Controller IO queue size 128, less than required.
00:06:21.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:21.306 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:21.306 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:21.306 Initialization complete. Launching workers.
00:06:21.306 ========================================================
00:06:21.306                                                                                 Latency(us)
00:06:21.306 Device Information                                                            :    IOPS   MiB/s    Average        min        max
00:06:21.306 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   80.57    0.04 1591965.80 1000091.91 2969283.60
00:06:21.306 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   80.57    0.04 1593369.89 1000805.12 2970181.92
00:06:21.306 ========================================================
00:06:21.306 Total                                                                         :  161.15    0.08 1592667.84 1000091.91 2970181.92
00:06:21.306
00:06:21.306 [2024-12-09 11:45:29.293443] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:06:21.306 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:21.306 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3080316
00:06:21.306 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:21.306 [2024-12-09 11:45:29.308101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:06:21.306 [2024-12-09 11:45:29.308123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
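Once the subsystem is gone, the script polls for perf to exit; the @35/@36/@38 records above are iterations of that loop. Its shape, reconstructed from the line numbers shown in the log (the real script's failure handling may differ slightly):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # line 35: is perf still running?
        sleep 0.5                               # line 36
        if (( delay++ > 30 )); then             # line 38: roughly a 15 s budget
            echo "perf did not exit in time" >&2
            break
        fi
    done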
00:06:21.306 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3080316 00:06:21.870 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3080316) - No such process 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3080316 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3080316 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.870 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3080316 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 [2024-12-09 11:45:29.824801] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3081226 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:21.871 11:45:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.128 [2024-12-09 11:45:29.938017] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:22.385 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.386 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:22.386 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.951 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.951 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:22.951 11:45:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.516 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.516 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:23.516 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.080 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.080 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:24.080 11:45:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.338 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.338 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:24.338 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.903 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.903 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226 00:06:24.903 11:45:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
00:06:25.468 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:25.468 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:25.468 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.035 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.035 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:26.035 11:45:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.600 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.601 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:26.601 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.858 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.858 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:26.858 11:45:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.423 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.423 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:27.423 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.988 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.988 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:27.988 11:45:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.552 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.552 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:28.552 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.117 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.117 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:29.117 11:45:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.117 Initializing NVMe Controllers
00:06:29.117 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:29.117 Controller IO queue size 128, less than required.
00:06:29.117 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
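For reference, the @57/@58/@60 entries above are one bounded wait loop in delete_subsystem.sh: the script probes the perf process with kill -0 every half second until it exits or the retry budget runs out. A minimal sketch of that idiom (variable names and the timeout handling are illustrative, not the script verbatim):

    # Watch a background process; give up after ~10s (20 probes x 0.5s).
    perf_pid=3081226   # illustrative; the script records $! of spdk_nvme_perf here
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do      # kill -0 sends no signal, it only tests existence
        (( delay++ > 20 )) && { echo 'perf did not finish in time' >&2; break; }
        sleep 0.5
    done
    wait "$perf_pid"                               # reap the exit status once the loop ends

Because kill -0 only tests whether the PID still exists, a finished perf run surfaces in the log as "kill: (3081226) - No such process" rather than as an error.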
00:06:29.117 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:29.117 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:29.117 Initialization complete. Launching workers.
00:06:29.117 ========================================================
00:06:29.117 Latency(us)
00:06:29.117 Device Information : IOPS MiB/s Average min max
00:06:29.117 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001434.23 1000058.82 1004394.37
00:06:29.117 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002487.56 1000063.66 1006331.27
00:06:29.117 ========================================================
00:06:29.117 Total : 256.00 0.12 1001960.90 1000058.82 1006331.27
00:06:29.117
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3081226
00:06:29.374 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3081226) - No such process
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3081226
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:29.374 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:06:29.631 rmmod nvme_rdma
00:06:29.631 rmmod nvme_fabrics 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3080273 ']'
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3080273
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3080273 ']'
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3080273
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080273
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080273'
00:06:29.631 killing process with pid 3080273
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3080273
00:06:29.631 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3080273
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:29.889
00:06:29.889 real 0m19.227s
00:06:29.889 user 0m48.970s
00:06:29.889 sys 0m5.429s
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:29.889 ************************************
00:06:29.889 END TEST nvmf_delete_subsystem
00:06:29.889 ************************************
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:29.889 ************************************
00:06:29.889 START TEST nvmf_host_management
00:06:29.889 ************************************
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:06:29.889 * Looking for test storage...
00:06:29.889 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:06:29.889 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.148 --rc genhtml_branch_coverage=1
00:06:30.148 --rc genhtml_function_coverage=1
00:06:30.148 --rc genhtml_legend=1
00:06:30.148 --rc geninfo_all_blocks=1
00:06:30.148 --rc geninfo_unexecuted_blocks=1
00:06:30.148
00:06:30.148 '
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.148 --rc genhtml_branch_coverage=1
00:06:30.148 --rc genhtml_function_coverage=1
00:06:30.148 --rc genhtml_legend=1
00:06:30.148 --rc geninfo_all_blocks=1
00:06:30.148 --rc geninfo_unexecuted_blocks=1
00:06:30.148
00:06:30.148 '
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.148 --rc genhtml_branch_coverage=1
00:06:30.148 --rc genhtml_function_coverage=1
00:06:30.148 --rc genhtml_legend=1
00:06:30.148 --rc geninfo_all_blocks=1
00:06:30.148 --rc geninfo_unexecuted_blocks=1
00:06:30.148
00:06:30.148 '
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:30.148 --rc genhtml_branch_coverage=1
00:06:30.148 --rc genhtml_function_coverage=1
00:06:30.148 --rc genhtml_legend=1
00:06:30.148 --rc geninfo_all_blocks=1
00:06:30.148 --rc geninfo_unexecuted_blocks=1
00:06:30.148
00:06:30.148 '
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:30.148 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:30.149 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:30.149 11:45:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:06:30.149 11:45:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)'
00:06:36.719 Found 0000:da:00.0 (0x15b3 - 0x1015)
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)'
00:06:36.719 Found 0000:da:00.1 (0x15b3 - 0x1015)
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0'
00:06:36.719 Found net devices under 0000:da:00.0: mlx_0_0
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1'
00:06:36.719 Found net devices under 0000:da:00.1: mlx_0_1
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:36.719 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:06:36.720 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:06:36.720 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:06:36.720 altname enp218s0f0np0
00:06:36.720 altname ens818f0np0
00:06:36.720 inet 192.168.100.8/24 scope global mlx_0_0
00:06:36.720 valid_lft forever preferred_lft forever
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:06:36.720 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:06:36.720 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:06:36.720 altname enp218s0f1np1
00:06:36.720 altname ens818f1np1
00:06:36.720 inet 192.168.100.9/24 scope global mlx_0_1
00:06:36.720 valid_lft forever preferred_lft forever
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:06:36.720 192.168.100.9'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:06:36.720 192.168.100.9'
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:06:36.720 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:06:36.720 192.168.100.9'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3085696
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3085696
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3085696 ']'
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:36.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.721 11:45:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 [2024-12-09 11:45:43.979042] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:06:36.721 [2024-12-09 11:45:43.979093] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:36.721 [2024-12-09 11:45:44.057098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:36.721 [2024-12-09 11:45:44.100722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:36.721 [2024-12-09 11:45:44.100760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:36.721 [2024-12-09 11:45:44.100767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:36.721 [2024-12-09 11:45:44.100773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:36.721 [2024-12-09 11:45:44.100777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:36.721 [2024-12-09 11:45:44.102313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:36.721 [2024-12-09 11:45:44.102425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:36.721 [2024-12-09 11:45:44.102531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:36.721 [2024-12-09 11:45:44.102532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 [2024-12-09 11:45:44.263606] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22a9c40/0x22ae130) succeed.
00:06:36.721 [2024-12-09 11:45:44.275197] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22ab2d0/0x22ef7d0) succeed.
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 Malloc0
00:06:36.721 [2024-12-09 11:45:44.468924] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3085750
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3085750 /var/tmp/bdevperf.sock
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3085750 ']'
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:06:36.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:36.721 {
00:06:36.721 "params": {
00:06:36.721 "name": "Nvme$subsystem",
00:06:36.721 "trtype": "$TEST_TRANSPORT",
00:06:36.721 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:36.721 "adrfam": "ipv4",
00:06:36.721 "trsvcid": "$NVMF_PORT",
00:06:36.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:36.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:36.721 "hdgst": ${hdgst:-false},
00:06:36.721 "ddgst": ${ddgst:-false}
00:06:36.721 },
00:06:36.721 "method": "bdev_nvme_attach_controller"
00:06:36.721 }
00:06:36.721 EOF
00:06:36.721 )")
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:36.721 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:36.721 "params": {
00:06:36.721 "name": "Nvme0",
00:06:36.721 "trtype": "rdma",
00:06:36.721 "traddr": "192.168.100.8",
00:06:36.721 "adrfam": "ipv4",
00:06:36.721 "trsvcid": "4420",
00:06:36.721 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:36.721 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:36.721 "hdgst": false,
00:06:36.721 "ddgst": false
00:06:36.721 },
00:06:36.721 "method": "bdev_nvme_attach_controller"
00:06:36.721 }'
00:06:36.721 [2024-12-09 11:45:44.560906] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:06:36.721 [2024-12-09 11:45:44.560955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085750 ]
00:06:36.721 [2024-12-09 11:45:44.641908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.721 [2024-12-09 11:45:44.683106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.979 Running I/O for 10 seconds...
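The JSON printed above is what bdevperf actually consumes: gen_nvmf_target_json assembles the bdev_nvme_attach_controller parameters from the heredoc, and the test hands the result to bdevperf as an anonymous file through process substitution, which is why the command line earlier shows --json /dev/fd/63. A rough sketch of that invocation pattern (flags copied from the log above; treat it as illustrative rather than the harness verbatim):

    # Feed generated JSON to bdevperf without a temp file; the shell
    # exposes <(...) as /dev/fd/63, matching the logged command line.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10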
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.979 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.980 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.980 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:36.980 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.980 11:45:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:36.980 11:45:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.980 11:45:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:38.177 297.00 IOPS, 18.56 MiB/s [2024-12-09T10:45:46.230Z]
[2024-12-09 11:45:45.992106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x182900
00:06:38.177 [2024-12-09 11:45:45.992136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:25dbc000 sqhd:7210 p:0 m:0 dnr:0
[... 62 similar print_command/print_completion pairs elided: all 64 in-flight commands on qid:1, 41 WRITEs covering lba:40960-46080 and 23 READs covering lba:38016-40832 (exactly the queue depth of 64), complete with ABORTED - SQ DELETION (00/08) once the host entry is removed from the subsystem ...]
00:06:38.178 [2024-12-09 11:45:45.993128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a639000 len:0x10000 key:0x182400
00:06:38.178 [2024-12-09 11:45:45.993137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:25dbc000 sqhd:7210 p:0 m:0 dnr:0
00:06:38.178 [2024-12-09 11:45:45.995921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:38.179 task offset: 40960 on job bdev=Nvme0n1 fails
00:06:38.179
00:06:38.179 Latency(us)
00:06:38.179 [2024-12-09T10:45:46.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:38.179 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:38.179 Job: Nvme0n1 ended in about 1.13 seconds with error
00:06:38.179 Verification LBA range: start 0x0 length 0x400
00:06:38.179 Nvme0n1 : 1.13 263.36 16.46 56.75 0.00 197965.16 2527.82 1014622.11
00:06:38.179 [2024-12-09T10:45:46.232Z] ===================================================================================================================
00:06:38.179 [2024-12-09T10:45:46.232Z] Total : 263.36 16.46 56.75 0.00 197965.16 2527.82 1014622.11
00:06:38.179 [2024-12-09 11:45:45.998430] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3085750
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:38.179 {
00:06:38.179 "params": {
00:06:38.179 "name": "Nvme$subsystem",
00:06:38.179 "trtype": "$TEST_TRANSPORT",
00:06:38.179 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:38.179 "adrfam": "ipv4",
00:06:38.179 "trsvcid": "$NVMF_PORT",
00:06:38.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:38.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
"nqn.2016-06.io.spdk:host$subsystem", 00:06:38.179 "hdgst": ${hdgst:-false}, 00:06:38.179 "ddgst": ${ddgst:-false} 00:06:38.179 }, 00:06:38.179 "method": "bdev_nvme_attach_controller" 00:06:38.179 } 00:06:38.179 EOF 00:06:38.179 )") 00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:38.179 11:45:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:38.179 "params": { 00:06:38.179 "name": "Nvme0", 00:06:38.179 "trtype": "rdma", 00:06:38.179 "traddr": "192.168.100.8", 00:06:38.179 "adrfam": "ipv4", 00:06:38.179 "trsvcid": "4420", 00:06:38.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:38.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:38.179 "hdgst": false, 00:06:38.179 "ddgst": false 00:06:38.179 }, 00:06:38.179 "method": "bdev_nvme_attach_controller" 00:06:38.179 }' 00:06:38.179 [2024-12-09 11:45:46.056995] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:06:38.179 [2024-12-09 11:45:46.057044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086060 ] 00:06:38.179 [2024-12-09 11:45:46.138081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.179 [2024-12-09 11:45:46.179070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.437 Running I/O for 1 seconds... 00:06:39.372 2989.00 IOPS, 186.81 MiB/s 00:06:39.372 Latency(us) 00:06:39.372 [2024-12-09T10:45:47.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.372 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:39.372 Verification LBA range: start 0x0 length 0x400 00:06:39.372 Nvme0n1 : 1.01 3008.20 188.01 0.00 0.00 20837.47 963.54 39945.75 00:06:39.372 [2024-12-09T10:45:47.425Z] =================================================================================================================== 00:06:39.372 [2024-12-09T10:45:47.425Z] Total : 3008.20 188.01 0.00 0.00 20837.47 963.54 39945.75 00:06:39.630 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3085750 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.630 
11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:39.630 rmmod nvme_rdma 00:06:39.630 rmmod nvme_fabrics 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3085696 ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3085696 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3085696 ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3085696 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3085696 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3085696' 00:06:39.630 killing process with pid 3085696 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3085696 00:06:39.630 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3085696 00:06:39.889 [2024-12-09 11:45:47.914975] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:39.889 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.889 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:39.889 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:39.889 00:06:39.889 real 0m10.139s 00:06:39.889 user 0m19.810s 00:06:39.889 sys 0m5.297s 00:06:39.889 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.889 11:45:47 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.889 ************************************ 00:06:39.889 END TEST nvmf_host_management 
00:06:39.889 ************************************ 00:06:40.147 11:45:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:40.147 11:45:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.147 11:45:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.147 11:45:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.147 ************************************ 00:06:40.147 START TEST nvmf_lvol 00:06:40.147 ************************************ 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:40.147 * Looking for test storage... 00:06:40.147 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.147 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.148 --rc genhtml_branch_coverage=1 00:06:40.148 --rc genhtml_function_coverage=1 00:06:40.148 --rc genhtml_legend=1 00:06:40.148 --rc geninfo_all_blocks=1 00:06:40.148 --rc geninfo_unexecuted_blocks=1 00:06:40.148 00:06:40.148 ' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.148 --rc genhtml_branch_coverage=1 00:06:40.148 --rc genhtml_function_coverage=1 00:06:40.148 --rc genhtml_legend=1 00:06:40.148 --rc geninfo_all_blocks=1 00:06:40.148 --rc geninfo_unexecuted_blocks=1 00:06:40.148 00:06:40.148 ' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.148 --rc genhtml_branch_coverage=1 00:06:40.148 --rc genhtml_function_coverage=1 00:06:40.148 --rc genhtml_legend=1 00:06:40.148 --rc geninfo_all_blocks=1 00:06:40.148 --rc geninfo_unexecuted_blocks=1 00:06:40.148 00:06:40.148 ' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.148 --rc genhtml_branch_coverage=1 00:06:40.148 --rc genhtml_function_coverage=1 00:06:40.148 --rc genhtml_legend=1 00:06:40.148 --rc geninfo_all_blocks=1 00:06:40.148 --rc geninfo_unexecuted_blocks=1 00:06:40.148 00:06:40.148 ' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:40.148 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:40.407 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.408 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.408 11:45:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.978 11:45:53 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:46.978 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:46.978 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:46.978 11:45:53 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:46.978 Found net devices under 0000:da:00.0: mlx_0_0 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:46.978 Found net devices under 0000:da:00.1: mlx_0_1 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
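rdma_device_init is stepping through load_ib_rdma_modules here; the first two probes (ib_cm, ib_core) appear just above, and the chain continues below with ib_umad, ib_uverbs, iw_cm, rdma_cm and rdma_ucm. A compact sketch of the same sequence, assuming the loop form and error handling (the real helper in nvmf/common.sh probes each module as a separate statement, exactly as traced):

# Sketch of the kernel-module load sequence for RDMA-transport NVMe-oF.
load_ib_rdma_modules_sketch() {
  # No-op on non-Linux hosts, mirroring the uname guard traced above.
  [ "$(uname)" = Linux ] || return 0
  local mod
  # The modules traced in this log, in the order they are probed.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || return 1
  done
}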
00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:46.978 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:46.979 11:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:46.979 
11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:46.979 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:46.979 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:06:46.979 altname enp218s0f0np0 00:06:46.979 altname ens818f0np0 00:06:46.979 inet 192.168.100.8/24 scope global mlx_0_0 00:06:46.979 valid_lft forever preferred_lft forever 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:46.979 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:46.979 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:06:46.979 altname enp218s0f1np1 00:06:46.979 altname ens818f1np1 00:06:46.979 inet 192.168.100.9/24 scope global mlx_0_1 00:06:46.979 valid_lft forever preferred_lft forever 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:46.979 192.168.100.9' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:46.979 192.168.100.9' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:46.979 192.168.100.9' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:46.979 
11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3089524 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3089524 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3089524 ']' 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.979 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.979 [2024-12-09 11:45:54.180903] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:06:46.979 [2024-12-09 11:45:54.180958] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.979 [2024-12-09 11:45:54.257209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.979 [2024-12-09 11:45:54.297834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.979 [2024-12-09 11:45:54.297871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.980 [2024-12-09 11:45:54.297879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.980 [2024-12-09 11:45:54.297886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.980 [2024-12-09 11:45:54.297892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
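nvmfappstart has launched the target (-i 0 -e 0xFFFF -m 0x7, i.e. cores 0-2 with all trace groups on) and waitforlisten blocks until the RPC socket answers. Condensed into a sketch together with the provisioning calls the next stretch of trace performs; paths are relative to the SPDK checkout, and the socket poll is a crude stand-in for the real waitforlisten helper:

    #!/usr/bin/env bash
    set -e
    rpc=scripts/rpc.py
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                    # Malloc0
    $rpc bdev_malloc_create 64 512                    # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420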
00:06:46.980 [2024-12-09 11:45:54.299228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.980 [2024-12-09 11:45:54.299341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.980 [2024-12-09 11:45:54.299342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:46.980 [2024-12-09 11:45:54.643238] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19dad80/0x19df270) succeed. 00:06:46.980 [2024-12-09 11:45:54.654055] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19dc370/0x1a20910) succeed. 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:46.980 11:45:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:47.237 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:47.237 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:47.495 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:47.753 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fca1d915-c3b2-4b30-9ec1-513fbc912d35 00:06:47.753 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fca1d915-c3b2-4b30-9ec1-513fbc912d35 lvol 20 00:06:48.014 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7073328d-2e52-4cb5-b1dd-c6853b4a1dcd 00:06:48.014 11:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.014 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7073328d-2e52-4cb5-b1dd-c6853b4a1dcd 00:06:48.273 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:48.531 [2024-12-09 11:45:56.351598] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:48.531 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:48.531 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:48.531 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3090011 00:06:48.531 11:45:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:49.904 11:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7073328d-2e52-4cb5-b1dd-c6853b4a1dcd MY_SNAPSHOT 00:06:49.904 11:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=358d57e0-7fa3-4af1-94a1-6ff92d462607 00:06:49.904 11:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7073328d-2e52-4cb5-b1dd-c6853b4a1dcd 30 00:06:50.162 11:45:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 358d57e0-7fa3-4af1-94a1-6ff92d462607 MY_CLONE 00:06:50.162 11:45:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f163fc9f-526e-45a0-9e08-57bff79ffdac 00:06:50.162 11:45:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f163fc9f-526e-45a0-9e08-57bff79ffdac 00:06:50.420 11:45:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3090011 00:07:00.384 Initializing NVMe Controllers 00:07:00.384 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:00.384 Controller IO queue size 128, less than required. 00:07:00.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:00.384 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:00.384 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:00.384 Initialization complete. Launching workers. 
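The report that follows was produced by an invocation of this shape (arguments copied from the trace above; only the binary path is abbreviated): 4 KiB random writes at queue depth 128 for 10 seconds, pinned to cores 3 and 4 (-c 0x18), against the namespace just exported over RDMA:

    build/bin/spdk_nvme_perf \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18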
00:07:00.384 ========================================================
00:07:00.384                                                                             Latency(us)
00:07:00.384 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:00.384 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   16599.60      64.84    7712.52    2563.55   46701.94
00:07:00.384 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   16630.90      64.96    7697.81    3230.02   50672.81
00:07:00.384 ========================================================
00:07:00.384 Total                                                                    :   33230.49     129.81    7705.16    2563.55   50672.81
00:07:00.384
00:07:00.384 11:46:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:00.384 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7073328d-2e52-4cb5-b1dd-c6853b4a1dcd 00:07:00.384 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fca1d915-c3b2-4b30-9ec1-513fbc912d35 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:00.642 rmmod nvme_rdma 00:07:00.642 rmmod nvme_fabrics 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3089524 ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3089524 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3089524 ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3089524 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089524 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089524' 00:07:00.642 killing process with pid 3089524 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3089524 00:07:00.642 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3089524 00:07:00.900 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:00.900 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:00.900 00:07:00.900 real 0m20.942s 00:07:00.900 user 1m10.498s 00:07:00.900 sys 0m5.478s 00:07:00.900 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.900 11:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.900 ************************************ 00:07:00.900 END TEST nvmf_lvol 00:07:00.900 ************************************ 00:07:01.160 11:46:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:01.160 11:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.160 11:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.160 11:46:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 ************************************ 00:07:01.160 START TEST nvmf_lvs_grow 00:07:01.160 ************************************ 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:01.160 * Looking for test storage... 
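Every suite here runs under run_test, which prints the START/END banners and the timing summary seen above. A stand-in with the same observable behavior, assuming only bash; the real helper in autotest_common.sh also handles xtrace bookkeeping and failure reporting:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_lvs_grow test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma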
00:07:01.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.160 --rc genhtml_branch_coverage=1 00:07:01.160 --rc genhtml_function_coverage=1 00:07:01.160 --rc genhtml_legend=1 00:07:01.160 --rc geninfo_all_blocks=1 00:07:01.160 --rc geninfo_unexecuted_blocks=1 00:07:01.160 00:07:01.160 ' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.160 --rc genhtml_branch_coverage=1 00:07:01.160 --rc genhtml_function_coverage=1 00:07:01.160 --rc genhtml_legend=1 00:07:01.160 --rc geninfo_all_blocks=1 00:07:01.160 --rc geninfo_unexecuted_blocks=1 00:07:01.160 00:07:01.160 ' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.160 --rc genhtml_branch_coverage=1 00:07:01.160 --rc genhtml_function_coverage=1 00:07:01.160 --rc genhtml_legend=1 00:07:01.160 --rc geninfo_all_blocks=1 00:07:01.160 --rc geninfo_unexecuted_blocks=1 00:07:01.160 00:07:01.160 ' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.160 --rc genhtml_branch_coverage=1 00:07:01.160 --rc genhtml_function_coverage=1 00:07:01.160 --rc genhtml_legend=1 00:07:01.160 --rc geninfo_all_blocks=1 00:07:01.160 --rc geninfo_unexecuted_blocks=1 00:07:01.160 00:07:01.160 ' 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
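The entries just above compare the installed lcov against 2 with the lt/cmp_versions helpers from scripts/common.sh before choosing coverage flags. The core of that comparison, reduced to a sketch; the real helper supports more operators and is what the trace steps through:

    # True when $1 sorts strictly before $2; fields split on . - : as in
    # scripts/common.sh, so "lt 1.15 2" succeeds.
    lt() {
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"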
00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.160 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.420 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.420 11:46:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.993 11:46:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.993 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:07.994 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:07.994 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:07.994 11:46:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:07.994 Found net devices under 0000:da:00.0: mlx_0_0 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:07.994 Found net devices under 0000:da:00.1: mlx_0_1 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:07.994 11:46:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:07.994 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:07.994 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:07.994 altname enp218s0f0np0 00:07:07.994 altname ens818f0np0 00:07:07.994 inet 192.168.100.8/24 scope global mlx_0_0 00:07:07.994 valid_lft forever preferred_lft forever 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:07.994 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:07.994 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:07.994 altname enp218s0f1np1 00:07:07.994 altname ens818f1np1 00:07:07.994 inet 192.168.100.9/24 scope global mlx_0_1 00:07:07.994 valid_lft forever preferred_lft forever 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:07.994 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.995 11:46:15 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:07.995 192.168.100.9' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:07.995 192.168.100.9' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:07.995 192.168.100.9' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3095175 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3095175 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3095175 ']' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.995 [2024-12-09 11:46:15.231137] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:07.995 [2024-12-09 11:46:15.231193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.995 [2024-12-09 11:46:15.307276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.995 [2024-12-09 11:46:15.347424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.995 [2024-12-09 11:46:15.347460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.995 [2024-12-09 11:46:15.347467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.995 [2024-12-09 11:46:15.347472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.995 [2024-12-09 11:46:15.347477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
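[Editor's note: the xtrace above shows nvmf/common.sh resolving the two test IPs by scraping "ip" output per RDMA interface. A minimal standalone sketch of that discovery step, assuming the same mlx_0_0/mlx_0_1 interface names this rig reports:

    # Print the first IPv4 address of each RDMA-capable NIC
    # (mirrors the nvmf/common.sh@116-117 pipeline in the trace above).
    for nic in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
    done
    # On this host the loop yields 192.168.100.8 and 192.168.100.9, which
    # common.sh stores as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.
]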
00:07:07.995 [2024-12-09 11:46:15.348064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:07.995 [2024-12-09 11:46:15.678621] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f44620/0x1f48b10) succeed. 00:07:07.995 [2024-12-09 11:46:15.688402] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f45ad0/0x1f8a1b0) succeed. 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.995 ************************************ 00:07:07.995 START TEST lvs_grow_clean 00:07:07.995 ************************************ 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.995 11:46:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.995 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:07.995 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:08.255 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:08.255 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:08.255 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:08.516 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:08.516 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:08.516 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9a2607f-0068-4eaa-8932-044cdb35684a lvol 150 00:07:08.775 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=764389e8-cb82-4506-ab01-ee04a3f6c9a5 00:07:08.775 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.775 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.775 [2024-12-09 11:46:16.767299] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.775 [2024-12-09 11:46:16.767351] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.775 true 00:07:08.775 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:08.775 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:09.034 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:09.034 11:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.292 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 764389e8-cb82-4506-ab01-ee04a3f6c9a5 00:07:09.292 11:46:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:09.552 [2024-12-09 11:46:17.489634] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:09.552 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3095670 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3095670 /var/tmp/bdevperf.sock 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3095670 ']' 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.811 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.811 [2024-12-09 11:46:17.735687] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
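[Editor's note: condensed from the trace, the clean-grow test exports the lvol over NVMe-oF/RDMA and then attaches bdevperf to it through a second RPC socket. A sketch using this run's namespace UUID and listener address; rpc.py here stands for spdk/scripts/rpc.py:

    # Target side: subsystem, namespace, RDMA listener on 192.168.100.8:4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
        764389e8-cb82-4506-ab01-ee04a3f6c9a5
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    # Initiator side: bdevperf serves its own socket (-r /var/tmp/bdevperf.sock),
    # so the attach goes through that socket rather than /var/tmp/spdk.sock.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
]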
00:07:09.811 [2024-12-09 11:46:17.735733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095670 ] 00:07:09.811 [2024-12-09 11:46:17.812412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.811 [2024-12-09 11:46:17.852258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.071 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.071 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:10.071 11:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:10.331 Nvme0n1 00:07:10.331 11:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:10.591 [ 00:07:10.591 { 00:07:10.591 "name": "Nvme0n1", 00:07:10.591 "aliases": [ 00:07:10.591 "764389e8-cb82-4506-ab01-ee04a3f6c9a5" 00:07:10.591 ], 00:07:10.591 "product_name": "NVMe disk", 00:07:10.591 "block_size": 4096, 00:07:10.591 "num_blocks": 38912, 00:07:10.591 "uuid": "764389e8-cb82-4506-ab01-ee04a3f6c9a5", 00:07:10.591 "numa_id": 1, 00:07:10.591 "assigned_rate_limits": { 00:07:10.591 "rw_ios_per_sec": 0, 00:07:10.591 "rw_mbytes_per_sec": 0, 00:07:10.591 "r_mbytes_per_sec": 0, 00:07:10.591 "w_mbytes_per_sec": 0 00:07:10.591 }, 00:07:10.591 "claimed": false, 00:07:10.591 "zoned": false, 00:07:10.591 "supported_io_types": { 00:07:10.591 "read": true, 00:07:10.591 "write": true, 00:07:10.591 "unmap": true, 00:07:10.591 "flush": true, 00:07:10.591 "reset": true, 00:07:10.591 "nvme_admin": true, 00:07:10.591 "nvme_io": true, 00:07:10.591 "nvme_io_md": false, 00:07:10.591 "write_zeroes": true, 00:07:10.591 "zcopy": false, 00:07:10.591 "get_zone_info": false, 00:07:10.591 "zone_management": false, 00:07:10.591 "zone_append": false, 00:07:10.591 "compare": true, 00:07:10.591 "compare_and_write": true, 00:07:10.591 "abort": true, 00:07:10.591 "seek_hole": false, 00:07:10.591 "seek_data": false, 00:07:10.591 "copy": true, 00:07:10.591 "nvme_iov_md": false 00:07:10.591 }, 00:07:10.591 "memory_domains": [ 00:07:10.591 { 00:07:10.591 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:10.591 "dma_device_type": 0 00:07:10.591 } 00:07:10.591 ], 00:07:10.591 "driver_specific": { 00:07:10.591 "nvme": [ 00:07:10.591 { 00:07:10.591 "trid": { 00:07:10.591 "trtype": "RDMA", 00:07:10.591 "adrfam": "IPv4", 00:07:10.591 "traddr": "192.168.100.8", 00:07:10.591 "trsvcid": "4420", 00:07:10.591 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:10.591 }, 00:07:10.591 "ctrlr_data": { 00:07:10.591 "cntlid": 1, 00:07:10.591 "vendor_id": "0x8086", 00:07:10.591 "model_number": "SPDK bdev Controller", 00:07:10.591 "serial_number": "SPDK0", 00:07:10.591 "firmware_revision": "25.01", 00:07:10.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.591 "oacs": { 00:07:10.591 "security": 0, 00:07:10.591 "format": 0, 00:07:10.591 "firmware": 0, 00:07:10.591 "ns_manage": 0 00:07:10.591 }, 00:07:10.591 "multi_ctrlr": true, 
00:07:10.591 "ana_reporting": false 00:07:10.591 }, 00:07:10.591 "vs": { 00:07:10.591 "nvme_version": "1.3" 00:07:10.591 }, 00:07:10.591 "ns_data": { 00:07:10.591 "id": 1, 00:07:10.592 "can_share": true 00:07:10.592 } 00:07:10.592 } 00:07:10.592 ], 00:07:10.592 "mp_policy": "active_passive" 00:07:10.592 } 00:07:10.592 } 00:07:10.592 ] 00:07:10.592 11:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3095893 00:07:10.592 11:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:10.592 11:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:10.592 Running I/O for 10 seconds... 00:07:11.531 Latency(us) 00:07:11.531 [2024-12-09T10:46:19.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.531 Nvme0n1 : 1.00 34242.00 133.76 0.00 0.00 0.00 0.00 0.00 00:07:11.531 [2024-12-09T10:46:19.584Z] =================================================================================================================== 00:07:11.531 [2024-12-09T10:46:19.584Z] Total : 34242.00 133.76 0.00 0.00 0.00 0.00 0.00 00:07:11.531 00:07:12.472 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:12.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.472 Nvme0n1 : 2.00 34336.00 134.12 0.00 0.00 0.00 0.00 0.00 00:07:12.472 [2024-12-09T10:46:20.525Z] =================================================================================================================== 00:07:12.472 [2024-12-09T10:46:20.525Z] Total : 34336.00 134.12 0.00 0.00 0.00 0.00 0.00 00:07:12.472 00:07:12.732 true 00:07:12.732 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:12.732 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.992 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:12.992 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:12.992 11:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3095893 00:07:13.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.560 Nvme0n1 : 3.00 34493.67 134.74 0.00 0.00 0.00 0.00 0.00 00:07:13.560 [2024-12-09T10:46:21.613Z] =================================================================================================================== 00:07:13.560 [2024-12-09T10:46:21.613Z] Total : 34493.67 134.74 0.00 0.00 0.00 0.00 0.00 00:07:13.560 00:07:14.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.496 Nvme0n1 : 4.00 34638.00 135.30 0.00 0.00 0.00 0.00 0.00 00:07:14.496 [2024-12-09T10:46:22.549Z] 
=================================================================================================================== 00:07:14.496 [2024-12-09T10:46:22.549Z] Total : 34638.00 135.30 0.00 0.00 0.00 0.00 0.00 00:07:14.496 00:07:15.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.874 Nvme0n1 : 5.00 34740.40 135.70 0.00 0.00 0.00 0.00 0.00 00:07:15.874 [2024-12-09T10:46:23.927Z] =================================================================================================================== 00:07:15.874 [2024-12-09T10:46:23.927Z] Total : 34740.40 135.70 0.00 0.00 0.00 0.00 0.00 00:07:15.874 00:07:16.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.810 Nvme0n1 : 6.00 34708.00 135.58 0.00 0.00 0.00 0.00 0.00 00:07:16.810 [2024-12-09T10:46:24.863Z] =================================================================================================================== 00:07:16.810 [2024-12-09T10:46:24.863Z] Total : 34708.00 135.58 0.00 0.00 0.00 0.00 0.00 00:07:16.810 00:07:17.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.746 Nvme0n1 : 7.00 34766.71 135.81 0.00 0.00 0.00 0.00 0.00 00:07:17.746 [2024-12-09T10:46:25.799Z] =================================================================================================================== 00:07:17.746 [2024-12-09T10:46:25.799Z] Total : 34766.71 135.81 0.00 0.00 0.00 0.00 0.00 00:07:17.746 00:07:18.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.684 Nvme0n1 : 8.00 34815.12 136.00 0.00 0.00 0.00 0.00 0.00 00:07:18.684 [2024-12-09T10:46:26.737Z] =================================================================================================================== 00:07:18.684 [2024-12-09T10:46:26.737Z] Total : 34815.12 136.00 0.00 0.00 0.00 0.00 0.00 00:07:18.684 00:07:19.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.621 Nvme0n1 : 9.00 34857.67 136.16 0.00 0.00 0.00 0.00 0.00 00:07:19.621 [2024-12-09T10:46:27.674Z] =================================================================================================================== 00:07:19.621 [2024-12-09T10:46:27.674Z] Total : 34857.67 136.16 0.00 0.00 0.00 0.00 0.00 00:07:19.621 00:07:20.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.558 Nvme0n1 : 10.00 34888.60 136.28 0.00 0.00 0.00 0.00 0.00 00:07:20.558 [2024-12-09T10:46:28.611Z] =================================================================================================================== 00:07:20.558 [2024-12-09T10:46:28.611Z] Total : 34888.60 136.28 0.00 0.00 0.00 0.00 0.00 00:07:20.558 00:07:20.558 00:07:20.558 Latency(us) 00:07:20.558 [2024-12-09T10:46:28.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.558 Nvme0n1 : 10.00 34889.20 136.29 0.00 0.00 3665.77 2434.19 11172.33 00:07:20.558 [2024-12-09T10:46:28.611Z] =================================================================================================================== 00:07:20.558 [2024-12-09T10:46:28.611Z] Total : 34889.20 136.29 0.00 0.00 3665.77 2434.19 11172.33 00:07:20.558 { 00:07:20.558 "results": [ 00:07:20.558 { 00:07:20.558 "job": "Nvme0n1", 00:07:20.558 "core_mask": "0x2", 00:07:20.559 "workload": "randwrite", 00:07:20.559 "status": "finished", 00:07:20.559 "queue_depth": 128, 00:07:20.559 "io_size": 4096, 
00:07:20.559 "runtime": 10.003096, 00:07:20.559 "iops": 34889.19830420502, 00:07:20.559 "mibps": 136.28593087580086, 00:07:20.559 "io_failed": 0, 00:07:20.559 "io_timeout": 0, 00:07:20.559 "avg_latency_us": 3665.7719202947196, 00:07:20.559 "min_latency_us": 2434.194285714286, 00:07:20.559 "max_latency_us": 11172.327619047619 00:07:20.559 } 00:07:20.559 ], 00:07:20.559 "core_count": 1 00:07:20.559 } 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3095670 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3095670 ']' 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3095670 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3095670 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3095670' 00:07:20.559 killing process with pid 3095670 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3095670 00:07:20.559 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.559 00:07:20.559 Latency(us) 00:07:20.559 [2024-12-09T10:46:28.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.559 [2024-12-09T10:46:28.612Z] =================================================================================================================== 00:07:20.559 [2024-12-09T10:46:28.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.559 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3095670 00:07:20.818 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:21.077 11:46:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.335 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:21.335 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:21.594 11:46:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.594 [2024-12-09 11:46:29.591850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.594 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:21.854 request: 00:07:21.854 { 00:07:21.854 "uuid": "c9a2607f-0068-4eaa-8932-044cdb35684a", 00:07:21.854 "method": "bdev_lvol_get_lvstores", 00:07:21.854 "req_id": 1 00:07:21.854 } 00:07:21.854 Got JSON-RPC error response 00:07:21.854 response: 00:07:21.854 { 00:07:21.854 "code": -19, 00:07:21.854 "message": "No such device" 00:07:21.854 } 00:07:21.854 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:21.854 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.854 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.854 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.854 11:46:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.112 aio_bdev 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 764389e8-cb82-4506-ab01-ee04a3f6c9a5 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=764389e8-cb82-4506-ab01-ee04a3f6c9a5 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.112 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.371 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 764389e8-cb82-4506-ab01-ee04a3f6c9a5 -t 2000 00:07:22.371 [ 00:07:22.371 { 00:07:22.371 "name": "764389e8-cb82-4506-ab01-ee04a3f6c9a5", 00:07:22.371 "aliases": [ 00:07:22.371 "lvs/lvol" 00:07:22.371 ], 00:07:22.371 "product_name": "Logical Volume", 00:07:22.371 "block_size": 4096, 00:07:22.371 "num_blocks": 38912, 00:07:22.371 "uuid": "764389e8-cb82-4506-ab01-ee04a3f6c9a5", 00:07:22.371 "assigned_rate_limits": { 00:07:22.371 "rw_ios_per_sec": 0, 00:07:22.371 "rw_mbytes_per_sec": 0, 00:07:22.371 "r_mbytes_per_sec": 0, 00:07:22.371 "w_mbytes_per_sec": 0 00:07:22.371 }, 00:07:22.371 "claimed": false, 00:07:22.371 "zoned": false, 00:07:22.371 "supported_io_types": { 00:07:22.371 "read": true, 00:07:22.371 "write": true, 00:07:22.371 "unmap": true, 00:07:22.371 "flush": false, 00:07:22.371 "reset": true, 00:07:22.371 "nvme_admin": false, 00:07:22.371 "nvme_io": false, 00:07:22.371 "nvme_io_md": false, 00:07:22.371 "write_zeroes": true, 00:07:22.371 "zcopy": false, 00:07:22.371 "get_zone_info": false, 00:07:22.371 "zone_management": false, 00:07:22.371 "zone_append": false, 00:07:22.371 "compare": false, 00:07:22.371 "compare_and_write": false, 00:07:22.371 "abort": false, 00:07:22.371 "seek_hole": true, 00:07:22.371 "seek_data": true, 00:07:22.371 "copy": false, 00:07:22.371 "nvme_iov_md": false 00:07:22.371 }, 00:07:22.371 "driver_specific": { 00:07:22.371 "lvol": { 00:07:22.371 "lvol_store_uuid": "c9a2607f-0068-4eaa-8932-044cdb35684a", 00:07:22.371 "base_bdev": "aio_bdev", 00:07:22.371 "thin_provision": false, 00:07:22.371 "num_allocated_clusters": 38, 00:07:22.371 "snapshot": false, 00:07:22.371 "clone": false, 00:07:22.371 "esnap_clone": false 00:07:22.371 } 00:07:22.371 } 00:07:22.371 } 00:07:22.371 ] 00:07:22.371 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:22.371 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:22.371 11:46:30 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:22.630 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:22.630 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:22.630 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:22.889 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:22.889 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 764389e8-cb82-4506-ab01-ee04a3f6c9a5 00:07:22.889 11:46:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9a2607f-0068-4eaa-8932-044cdb35684a 00:07:23.147 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.405 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.406 00:07:23.406 real 0m15.560s 00:07:23.406 user 0m15.581s 00:07:23.406 sys 0m1.013s 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:23.406 ************************************ 00:07:23.406 END TEST lvs_grow_clean 00:07:23.406 ************************************ 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.406 ************************************ 00:07:23.406 START TEST lvs_grow_dirty 00:07:23.406 ************************************ 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.406 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.664 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:23.664 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:23.922 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:23.922 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:23.922 11:46:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 lvol 150 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.180 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.438 [2024-12-09 11:46:32.397873] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:24.438 [2024-12-09 11:46:32.397926] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.438 true 00:07:24.438 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:24.438 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:24.696 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:24.696 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.955 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:24.955 11:46:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:25.214 [2024-12-09 11:46:33.136273] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:25.214 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3098379 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3098379 /var/tmp/bdevperf.sock 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3098379 ']' 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.472 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.472 [2024-12-09 11:46:33.382830] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
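[Editor's note: both variants grow the pool the same way, as traced above and again below: enlarge the backing AIO file, rescan the AIO bdev, then grow the lvstore on top of it. A condensed sketch using the dirty run's lvstore UUID; the path is shortened, the test's actual file lives at test/nvmf/target/aio_bdev in the repo:

    truncate -s 400M ./aio_bdev        # backing file 200M -> 400M
    rpc.py bdev_aio_rescan aio_bdev    # bdev grows from 51200 to 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111
    # total_data_clusters should now read 99 where it was 49 before the grow:
    rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 \
        | jq -r '.[0].total_data_clusters'
]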
00:07:25.472 [2024-12-09 11:46:33.382876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098379 ] 00:07:25.472 [2024-12-09 11:46:33.461100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.472 [2024-12-09 11:46:33.503640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.730 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.730 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.730 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.988 Nvme0n1 00:07:25.988 11:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.246 [ 00:07:26.246 { 00:07:26.246 "name": "Nvme0n1", 00:07:26.246 "aliases": [ 00:07:26.246 "6ccc54b5-0b58-4d4e-bce6-3f3e17966022" 00:07:26.246 ], 00:07:26.246 "product_name": "NVMe disk", 00:07:26.246 "block_size": 4096, 00:07:26.246 "num_blocks": 38912, 00:07:26.246 "uuid": "6ccc54b5-0b58-4d4e-bce6-3f3e17966022", 00:07:26.246 "numa_id": 1, 00:07:26.246 "assigned_rate_limits": { 00:07:26.246 "rw_ios_per_sec": 0, 00:07:26.246 "rw_mbytes_per_sec": 0, 00:07:26.246 "r_mbytes_per_sec": 0, 00:07:26.246 "w_mbytes_per_sec": 0 00:07:26.246 }, 00:07:26.246 "claimed": false, 00:07:26.246 "zoned": false, 00:07:26.246 "supported_io_types": { 00:07:26.246 "read": true, 00:07:26.246 "write": true, 00:07:26.246 "unmap": true, 00:07:26.246 "flush": true, 00:07:26.246 "reset": true, 00:07:26.246 "nvme_admin": true, 00:07:26.246 "nvme_io": true, 00:07:26.246 "nvme_io_md": false, 00:07:26.246 "write_zeroes": true, 00:07:26.246 "zcopy": false, 00:07:26.246 "get_zone_info": false, 00:07:26.246 "zone_management": false, 00:07:26.246 "zone_append": false, 00:07:26.246 "compare": true, 00:07:26.246 "compare_and_write": true, 00:07:26.246 "abort": true, 00:07:26.246 "seek_hole": false, 00:07:26.246 "seek_data": false, 00:07:26.246 "copy": true, 00:07:26.246 "nvme_iov_md": false 00:07:26.246 }, 00:07:26.246 "memory_domains": [ 00:07:26.246 { 00:07:26.246 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:26.246 "dma_device_type": 0 00:07:26.246 } 00:07:26.246 ], 00:07:26.246 "driver_specific": { 00:07:26.246 "nvme": [ 00:07:26.246 { 00:07:26.246 "trid": { 00:07:26.246 "trtype": "RDMA", 00:07:26.246 "adrfam": "IPv4", 00:07:26.246 "traddr": "192.168.100.8", 00:07:26.246 "trsvcid": "4420", 00:07:26.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.246 }, 00:07:26.246 "ctrlr_data": { 00:07:26.246 "cntlid": 1, 00:07:26.246 "vendor_id": "0x8086", 00:07:26.246 "model_number": "SPDK bdev Controller", 00:07:26.246 "serial_number": "SPDK0", 00:07:26.246 "firmware_revision": "25.01", 00:07:26.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.246 "oacs": { 00:07:26.246 "security": 0, 00:07:26.246 "format": 0, 00:07:26.246 "firmware": 0, 00:07:26.246 "ns_manage": 0 00:07:26.246 }, 00:07:26.246 "multi_ctrlr": true, 
00:07:26.246 "ana_reporting": false 00:07:26.246 }, 00:07:26.246 "vs": { 00:07:26.246 "nvme_version": "1.3" 00:07:26.246 }, 00:07:26.246 "ns_data": { 00:07:26.246 "id": 1, 00:07:26.246 "can_share": true 00:07:26.246 } 00:07:26.246 } 00:07:26.246 ], 00:07:26.246 "mp_policy": "active_passive" 00:07:26.246 } 00:07:26.246 } 00:07:26.246 ] 00:07:26.246 11:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.246 11:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3098504 00:07:26.246 11:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.246 Running I/O for 10 seconds... 00:07:27.184 Latency(us) 00:07:27.184 [2024-12-09T10:46:35.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.184 Nvme0n1 : 1.00 34145.00 133.38 0.00 0.00 0.00 0.00 0.00 00:07:27.184 [2024-12-09T10:46:35.237Z] =================================================================================================================== 00:07:27.184 [2024-12-09T10:46:35.237Z] Total : 34145.00 133.38 0.00 0.00 0.00 0.00 0.00 00:07:27.184 00:07:28.116 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:28.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.116 Nvme0n1 : 2.00 34463.00 134.62 0.00 0.00 0.00 0.00 0.00 00:07:28.116 [2024-12-09T10:46:36.169Z] =================================================================================================================== 00:07:28.116 [2024-12-09T10:46:36.169Z] Total : 34463.00 134.62 0.00 0.00 0.00 0.00 0.00 00:07:28.116 00:07:28.374 true 00:07:28.374 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:28.374 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:28.632 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.632 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.632 11:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3098504 00:07:29.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.197 Nvme0n1 : 3.00 34539.33 134.92 0.00 0.00 0.00 0.00 0.00 00:07:29.197 [2024-12-09T10:46:37.250Z] =================================================================================================================== 00:07:29.197 [2024-12-09T10:46:37.250Z] Total : 34539.33 134.92 0.00 0.00 0.00 0.00 0.00 00:07:29.197 00:07:30.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.130 Nvme0n1 : 4.00 34560.25 135.00 0.00 0.00 0.00 0.00 0.00 00:07:30.130 [2024-12-09T10:46:38.183Z] 
=================================================================================================================== 00:07:30.130 [2024-12-09T10:46:38.183Z] Total : 34560.25 135.00 0.00 0.00 0.00 0.00 0.00 00:07:30.130 00:07:31.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.504 Nvme0n1 : 5.00 34579.00 135.07 0.00 0.00 0.00 0.00 0.00 00:07:31.504 [2024-12-09T10:46:39.557Z] =================================================================================================================== 00:07:31.504 [2024-12-09T10:46:39.557Z] Total : 34579.00 135.07 0.00 0.00 0.00 0.00 0.00 00:07:31.504 00:07:32.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.436 Nvme0n1 : 6.00 34629.00 135.27 0.00 0.00 0.00 0.00 0.00 00:07:32.436 [2024-12-09T10:46:40.489Z] =================================================================================================================== 00:07:32.436 [2024-12-09T10:46:40.489Z] Total : 34629.00 135.27 0.00 0.00 0.00 0.00 0.00 00:07:32.436 00:07:33.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.369 Nvme0n1 : 7.00 34646.43 135.34 0.00 0.00 0.00 0.00 0.00 00:07:33.369 [2024-12-09T10:46:41.422Z] =================================================================================================================== 00:07:33.369 [2024-12-09T10:46:41.422Z] Total : 34646.43 135.34 0.00 0.00 0.00 0.00 0.00 00:07:33.369 00:07:34.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.303 Nvme0n1 : 8.00 34704.25 135.56 0.00 0.00 0.00 0.00 0.00 00:07:34.303 [2024-12-09T10:46:42.356Z] =================================================================================================================== 00:07:34.303 [2024-12-09T10:46:42.356Z] Total : 34704.25 135.56 0.00 0.00 0.00 0.00 0.00 00:07:34.303 00:07:35.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.237 Nvme0n1 : 9.00 34755.33 135.76 0.00 0.00 0.00 0.00 0.00 00:07:35.237 [2024-12-09T10:46:43.290Z] =================================================================================================================== 00:07:35.237 [2024-12-09T10:46:43.290Z] Total : 34755.33 135.76 0.00 0.00 0.00 0.00 0.00 00:07:35.237 00:07:36.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.170 Nvme0n1 : 10.00 34806.10 135.96 0.00 0.00 0.00 0.00 0.00 00:07:36.170 [2024-12-09T10:46:44.223Z] =================================================================================================================== 00:07:36.170 [2024-12-09T10:46:44.223Z] Total : 34806.10 135.96 0.00 0.00 0.00 0.00 0.00 00:07:36.170 00:07:36.170 00:07:36.170 Latency(us) 00:07:36.170 [2024-12-09T10:46:44.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.170 Nvme0n1 : 10.00 34805.01 135.96 0.00 0.00 3674.53 2262.55 13169.62 00:07:36.170 [2024-12-09T10:46:44.223Z] =================================================================================================================== 00:07:36.170 [2024-12-09T10:46:44.223Z] Total : 34805.01 135.96 0.00 0.00 3674.53 2262.55 13169.62 00:07:36.170 { 00:07:36.170 "results": [ 00:07:36.170 { 00:07:36.170 "job": "Nvme0n1", 00:07:36.170 "core_mask": "0x2", 00:07:36.170 "workload": "randwrite", 00:07:36.170 "status": "finished", 00:07:36.170 "queue_depth": 128, 00:07:36.170 "io_size": 4096, 
00:07:36.170 "runtime": 10.003186, 00:07:36.170 "iops": 34805.01112345607, 00:07:36.170 "mibps": 135.95707470100027, 00:07:36.170 "io_failed": 0, 00:07:36.170 "io_timeout": 0, 00:07:36.170 "avg_latency_us": 3674.52535623571, 00:07:36.170 "min_latency_us": 2262.552380952381, 00:07:36.170 "max_latency_us": 13169.615238095239 00:07:36.170 } 00:07:36.170 ], 00:07:36.170 "core_count": 1 00:07:36.170 } 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3098379 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3098379 ']' 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3098379 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.170 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3098379 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3098379' 00:07:36.430 killing process with pid 3098379 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3098379 00:07:36.430 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.430 00:07:36.430 Latency(us) 00:07:36.430 [2024-12-09T10:46:44.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.430 [2024-12-09T10:46:44.483Z] =================================================================================================================== 00:07:36.430 [2024-12-09T10:46:44.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3098379 00:07:36.430 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:36.688 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.947 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:36.947 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.947 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.947 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:36.947 11:46:44 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3095175 00:07:36.947 11:46:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3095175 00:07:37.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3095175 Killed "${NVMF_APP[@]}" "$@" 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3100346 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3100346 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3100346 ']' 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.205 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.205 [2024-12-09 11:46:45.061836] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:37.205 [2024-12-09 11:46:45.061882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.205 [2024-12-09 11:46:45.138256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.205 [2024-12-09 11:46:45.178605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.206 [2024-12-09 11:46:45.178642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.206 [2024-12-09 11:46:45.178649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.206 [2024-12-09 11:46:45.178654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
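The sequence above is the dirty-recovery scenario under test: the original target (pid 3095175) is killed with SIGKILL while the lvstore still has unflushed metadata, a replacement nvmf_tgt (pid 3100346) is started, and re-creating the AIO bdev below triggers blobstore recovery ("Performing recovery on blobstore"). A minimal sketch of that re-attach sequence, assuming rpc.py is on PATH and reusing the bdev name and UUIDs from this trace purely for illustration:

  # re-create the backing AIO bdev (4096-byte blocks); bdev examine triggers blobstore recovery
  rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # poll for the recovered lvol bdev with a 2000 ms timeout
  rpc.py bdev_get_bdevs -b 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 -t 2000
  # the lvstore should report its pre-crash free cluster count (61 in this run)
  rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 | jq -r '.[0].free_clusters'

All four RPCs and both identifiers appear verbatim in the surrounding trace; only the relative ./aio_bdev path is an assumption.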
00:07:37.206 [2024-12-09 11:46:45.178659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.206 [2024-12-09 11:46:45.179192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.464 [2024-12-09 11:46:45.481303] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:37.464 [2024-12-09 11:46:45.481384] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:37.464 [2024-12-09 11:46:45.481414] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:37.464 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.722 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 -t 2000 00:07:37.981 [ 00:07:37.981 { 00:07:37.981 "name": "6ccc54b5-0b58-4d4e-bce6-3f3e17966022", 00:07:37.981 "aliases": [ 00:07:37.981 "lvs/lvol" 00:07:37.981 ], 00:07:37.981 "product_name": "Logical Volume", 00:07:37.981 "block_size": 4096, 00:07:37.981 "num_blocks": 38912, 00:07:37.981 "uuid": "6ccc54b5-0b58-4d4e-bce6-3f3e17966022", 00:07:37.981 "assigned_rate_limits": { 00:07:37.981 "rw_ios_per_sec": 0, 00:07:37.981 "rw_mbytes_per_sec": 0, 
00:07:37.981 "r_mbytes_per_sec": 0, 00:07:37.981 "w_mbytes_per_sec": 0 00:07:37.981 }, 00:07:37.981 "claimed": false, 00:07:37.981 "zoned": false, 00:07:37.981 "supported_io_types": { 00:07:37.981 "read": true, 00:07:37.981 "write": true, 00:07:37.981 "unmap": true, 00:07:37.981 "flush": false, 00:07:37.981 "reset": true, 00:07:37.981 "nvme_admin": false, 00:07:37.981 "nvme_io": false, 00:07:37.981 "nvme_io_md": false, 00:07:37.981 "write_zeroes": true, 00:07:37.981 "zcopy": false, 00:07:37.981 "get_zone_info": false, 00:07:37.981 "zone_management": false, 00:07:37.981 "zone_append": false, 00:07:37.981 "compare": false, 00:07:37.981 "compare_and_write": false, 00:07:37.981 "abort": false, 00:07:37.981 "seek_hole": true, 00:07:37.981 "seek_data": true, 00:07:37.981 "copy": false, 00:07:37.981 "nvme_iov_md": false 00:07:37.981 }, 00:07:37.981 "driver_specific": { 00:07:37.981 "lvol": { 00:07:37.981 "lvol_store_uuid": "6f26c19b-5dc0-43b0-9953-c04c6ec0d111", 00:07:37.981 "base_bdev": "aio_bdev", 00:07:37.981 "thin_provision": false, 00:07:37.981 "num_allocated_clusters": 38, 00:07:37.981 "snapshot": false, 00:07:37.981 "clone": false, 00:07:37.981 "esnap_clone": false 00:07:37.981 } 00:07:37.981 } 00:07:37.981 } 00:07:37.981 ] 00:07:37.981 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:37.981 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:37.981 11:46:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:38.239 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:38.239 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:38.239 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:38.239 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:38.239 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.521 [2024-12-09 11:46:46.450204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.521 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:38.780 request: 00:07:38.780 { 00:07:38.780 "uuid": "6f26c19b-5dc0-43b0-9953-c04c6ec0d111", 00:07:38.780 "method": "bdev_lvol_get_lvstores", 00:07:38.780 "req_id": 1 00:07:38.780 } 00:07:38.780 Got JSON-RPC error response 00:07:38.780 response: 00:07:38.780 { 00:07:38.780 "code": -19, 00:07:38.780 "message": "No such device" 00:07:38.780 } 00:07:38.780 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:38.780 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.780 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.780 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.780 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.038 aio_bdev 00:07:39.038 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:39.038 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:39.038 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.038 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.039 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.039 11:46:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.039 11:46:46 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.039 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 -t 2000 00:07:39.302 [ 00:07:39.302 { 00:07:39.302 "name": "6ccc54b5-0b58-4d4e-bce6-3f3e17966022", 00:07:39.302 "aliases": [ 00:07:39.302 "lvs/lvol" 00:07:39.302 ], 00:07:39.302 "product_name": "Logical Volume", 00:07:39.302 "block_size": 4096, 00:07:39.302 "num_blocks": 38912, 00:07:39.302 "uuid": "6ccc54b5-0b58-4d4e-bce6-3f3e17966022", 00:07:39.302 "assigned_rate_limits": { 00:07:39.302 "rw_ios_per_sec": 0, 00:07:39.302 "rw_mbytes_per_sec": 0, 00:07:39.302 "r_mbytes_per_sec": 0, 00:07:39.302 "w_mbytes_per_sec": 0 00:07:39.302 }, 00:07:39.302 "claimed": false, 00:07:39.302 "zoned": false, 00:07:39.302 "supported_io_types": { 00:07:39.302 "read": true, 00:07:39.302 "write": true, 00:07:39.302 "unmap": true, 00:07:39.302 "flush": false, 00:07:39.302 "reset": true, 00:07:39.302 "nvme_admin": false, 00:07:39.302 "nvme_io": false, 00:07:39.302 "nvme_io_md": false, 00:07:39.302 "write_zeroes": true, 00:07:39.302 "zcopy": false, 00:07:39.302 "get_zone_info": false, 00:07:39.302 "zone_management": false, 00:07:39.302 "zone_append": false, 00:07:39.302 "compare": false, 00:07:39.302 "compare_and_write": false, 00:07:39.302 "abort": false, 00:07:39.302 "seek_hole": true, 00:07:39.302 "seek_data": true, 00:07:39.302 "copy": false, 00:07:39.302 "nvme_iov_md": false 00:07:39.302 }, 00:07:39.302 "driver_specific": { 00:07:39.302 "lvol": { 00:07:39.302 "lvol_store_uuid": "6f26c19b-5dc0-43b0-9953-c04c6ec0d111", 00:07:39.302 "base_bdev": "aio_bdev", 00:07:39.302 "thin_provision": false, 00:07:39.302 "num_allocated_clusters": 38, 00:07:39.302 "snapshot": false, 00:07:39.302 "clone": false, 00:07:39.302 "esnap_clone": false 00:07:39.302 } 00:07:39.302 } 00:07:39.302 } 00:07:39.302 ] 00:07:39.302 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.302 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:39.302 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.561 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.561 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:39.561 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:39.561 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:39.561 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ccc54b5-0b58-4d4e-bce6-3f3e17966022 00:07:39.820 11:46:47 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f26c19b-5dc0-43b0-9953-c04c6ec0d111 00:07:40.078 11:46:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.337 00:07:40.337 real 0m16.759s 00:07:40.337 user 0m44.488s 00:07:40.337 sys 0m2.741s 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.337 ************************************ 00:07:40.337 END TEST lvs_grow_dirty 00:07:40.337 ************************************ 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.337 nvmf_trace.0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:40.337 rmmod nvme_rdma 00:07:40.337 rmmod nvme_fabrics 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.337 
11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3100346 ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3100346 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3100346 ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3100346 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3100346 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3100346' 00:07:40.337 killing process with pid 3100346 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3100346 00:07:40.337 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3100346 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:40.597 00:07:40.597 real 0m39.489s 00:07:40.597 user 1m5.679s 00:07:40.597 sys 0m8.634s 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:40.597 ************************************ 00:07:40.597 END TEST nvmf_lvs_grow 00:07:40.597 ************************************ 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.597 ************************************ 00:07:40.597 START TEST nvmf_bdev_io_wait 00:07:40.597 ************************************ 00:07:40.597 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:40.857 * Looking for test storage... 
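The teardown just traced follows the harness's standard post-test pattern: archive the SPDK trace shared-memory file for offline analysis, unload the RDMA NVMe modules, and kill the target only after checking what the pid currently names. A condensed sketch of the same steps, assuming $pid and $out are already set (common.sh additionally retries the modprobe up to 20 times and special-cases sudo-wrapped processes):

  tar -C /dev/shm/ -czf "$out/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep nvmf_trace.0 for offline spdk_trace analysis
  modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics
  [[ $(ps --no-headers -o comm= "$pid") == reactor_* ]] && kill "$pid" && wait "$pid"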
00:07:40.857 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.857 --rc genhtml_branch_coverage=1 00:07:40.857 --rc genhtml_function_coverage=1 00:07:40.857 --rc genhtml_legend=1 00:07:40.857 --rc geninfo_all_blocks=1 00:07:40.857 --rc geninfo_unexecuted_blocks=1 00:07:40.857 00:07:40.857 ' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.857 --rc genhtml_branch_coverage=1 00:07:40.857 --rc genhtml_function_coverage=1 00:07:40.857 --rc genhtml_legend=1 00:07:40.857 --rc geninfo_all_blocks=1 00:07:40.857 --rc geninfo_unexecuted_blocks=1 00:07:40.857 00:07:40.857 ' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.857 --rc genhtml_branch_coverage=1 00:07:40.857 --rc genhtml_function_coverage=1 00:07:40.857 --rc genhtml_legend=1 00:07:40.857 --rc geninfo_all_blocks=1 00:07:40.857 --rc geninfo_unexecuted_blocks=1 00:07:40.857 00:07:40.857 ' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.857 --rc genhtml_branch_coverage=1 00:07:40.857 --rc genhtml_function_coverage=1 00:07:40.857 --rc genhtml_legend=1 00:07:40.857 --rc geninfo_all_blocks=1 00:07:40.857 --rc geninfo_unexecuted_blocks=1 00:07:40.857 00:07:40.857 ' 00:07:40.857 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.857 11:46:48 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.858 11:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.429 11:46:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.429 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:47.430 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:47.430 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:47.430 11:46:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:47.430 Found net devices under 0000:da:00.0: mlx_0_0 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:47.430 Found net devices under 0000:da:00.1: mlx_0_1 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:47.430 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:47.430 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:47.430 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:47.430 altname enp218s0f0np0 00:07:47.430 altname ens818f0np0 00:07:47.430 inet 192.168.100.8/24 scope global mlx_0_0 00:07:47.430 valid_lft forever preferred_lft forever 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:47.431 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:47.431 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:47.431 altname enp218s0f1np1 00:07:47.431 altname ens818f1np1 00:07:47.431 inet 192.168.100.9/24 scope global mlx_0_1 00:07:47.431 valid_lft forever preferred_lft forever 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:47.431 192.168.100.9' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:47.431 192.168.100.9' 00:07:47.431 
11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:47.431 192.168.100.9' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3104170 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3104170 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3104170 ']' 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 [2024-12-09 11:46:54.736350] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
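RDMA_IP_LIST is split with head -n 1 (first target IP) and tail -n +2 | head -n 1 (second), nvme-rdma is loaded, and the target is booted on all four cores (-m 0xF) with --wait-for-rpc, which parks the framework until the test explicitly calls framework_start_init. A hedged sketch of the start/wait dance that nvmfappstart and waitforlisten perform, assuming scripts/rpc.py and the default /var/tmp/spdk.sock (the real helper also bounds its retries):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # spin until the app answers on its RPC socket, bailing if it died
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done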
00:07:47.431 [2024-12-09 11:46:54.736393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.431 [2024-12-09 11:46:54.812011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.431 [2024-12-09 11:46:54.852261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.431 [2024-12-09 11:46:54.852298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.431 [2024-12-09 11:46:54.852305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.431 [2024-12-09 11:46:54.852310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.431 [2024-12-09 11:46:54.852315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.431 [2024-12-09 11:46:54.853876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.431 [2024-12-09 11:46:54.853987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.431 [2024-12-09 11:46:54.854092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.431 [2024-12-09 11:46:54.854094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:47.431 11:46:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.431 11:46:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 [2024-12-09 11:46:55.025024] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2176990/0x217ae80) succeed. 00:07:47.431 [2024-12-09 11:46:55.036147] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2178020/0x21bc520) succeed. 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.432 Malloc0 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.432 [2024-12-09 11:46:55.213911] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3104196 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3104198 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
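Pulled out of the interleaved xtrace, the target-side provisioning is a short RPC script, and the bdev_set_options call is the whole point of this test: -p 5 -c 1 shrinks the bdev_io pool and per-thread cache so the workloads below are forced onto the io-wait path that nvmf_bdev_io_wait exercises. Roughly (rpc_cmd is a thin wrapper over scripts/rpc.py):

    rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool: make IO wait
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420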
00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.432 { 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme$subsystem", 00:07:47.432 "trtype": "$TEST_TRANSPORT", 00:07:47.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "$NVMF_PORT", 00:07:47.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.432 "hdgst": ${hdgst:-false}, 00:07:47.432 "ddgst": ${ddgst:-false} 00:07:47.432 }, 00:07:47.432 "method": "bdev_nvme_attach_controller" 00:07:47.432 } 00:07:47.432 EOF 00:07:47.432 )") 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3104200 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.432 { 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme$subsystem", 00:07:47.432 "trtype": "$TEST_TRANSPORT", 00:07:47.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "$NVMF_PORT", 00:07:47.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.432 "hdgst": ${hdgst:-false}, 00:07:47.432 "ddgst": ${ddgst:-false} 00:07:47.432 }, 00:07:47.432 "method": "bdev_nvme_attach_controller" 00:07:47.432 } 00:07:47.432 EOF 00:07:47.432 )") 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3104203 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.432 { 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme$subsystem", 00:07:47.432 "trtype": "$TEST_TRANSPORT", 
00:07:47.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "$NVMF_PORT", 00:07:47.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.432 "hdgst": ${hdgst:-false}, 00:07:47.432 "ddgst": ${ddgst:-false} 00:07:47.432 }, 00:07:47.432 "method": "bdev_nvme_attach_controller" 00:07:47.432 } 00:07:47.432 EOF 00:07:47.432 )") 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.432 { 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme$subsystem", 00:07:47.432 "trtype": "$TEST_TRANSPORT", 00:07:47.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "$NVMF_PORT", 00:07:47.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.432 "hdgst": ${hdgst:-false}, 00:07:47.432 "ddgst": ${ddgst:-false} 00:07:47.432 }, 00:07:47.432 "method": "bdev_nvme_attach_controller" 00:07:47.432 } 00:07:47.432 EOF 00:07:47.432 )") 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3104196 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme1", 00:07:47.432 "trtype": "rdma", 00:07:47.432 "traddr": "192.168.100.8", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "4420", 00:07:47.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.432 "hdgst": false, 00:07:47.432 "ddgst": false 00:07:47.432 }, 00:07:47.432 "method": "bdev_nvme_attach_controller" 00:07:47.432 }' 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:47.432 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.432 "params": { 00:07:47.432 "name": "Nvme1", 00:07:47.432 "trtype": "rdma", 00:07:47.432 "traddr": "192.168.100.8", 00:07:47.432 "adrfam": "ipv4", 00:07:47.432 "trsvcid": "4420", 00:07:47.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.433 "hdgst": false, 00:07:47.433 "ddgst": false 00:07:47.433 }, 00:07:47.433 "method": "bdev_nvme_attach_controller" 00:07:47.433 }' 00:07:47.433 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:47.433 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.433 "params": { 00:07:47.433 "name": "Nvme1", 00:07:47.433 "trtype": "rdma", 00:07:47.433 "traddr": "192.168.100.8", 00:07:47.433 "adrfam": "ipv4", 00:07:47.433 "trsvcid": "4420", 00:07:47.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.433 "hdgst": false, 00:07:47.433 "ddgst": false 00:07:47.433 }, 00:07:47.433 "method": "bdev_nvme_attach_controller" 00:07:47.433 }' 00:07:47.433 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:47.433 11:46:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.433 "params": { 00:07:47.433 "name": "Nvme1", 00:07:47.433 "trtype": "rdma", 00:07:47.433 "traddr": "192.168.100.8", 00:07:47.433 "adrfam": "ipv4", 00:07:47.433 "trsvcid": "4420", 00:07:47.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.433 "hdgst": false, 00:07:47.433 "ddgst": false 00:07:47.433 }, 00:07:47.433 "method": "bdev_nvme_attach_controller" 00:07:47.433 }' 00:07:47.433 [2024-12-09 11:46:55.263157] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:47.433 [2024-12-09 11:46:55.263207] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:47.433 [2024-12-09 11:46:55.263772] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:47.433 [2024-12-09 11:46:55.263819] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:47.433 [2024-12-09 11:46:55.265131] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:47.433 [2024-12-09 11:46:55.265174] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:47.433 [2024-12-09 11:46:55.265978] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
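Decoded, the four bdevperf launches above differ only in core mask, shm id (-i), and workload; each one reads its attach configuration from an anonymous pipe (--json /dev/fd/63) fed by gen_nvmf_target_json, which expands one bdev_nvme_attach_controller stanza per subsystem from the heredoc shown and normalizes it with jq, yielding the four identical Nvme1 configs printed above. A condensed, hypothetical loop equivalent to what the script writes out longhand:

    workloads=(write read flush unmap)
    masks=(0x10 0x20 0x40 0x80)
    for i in 0 1 2 3; do
        build/examples/bdevperf -m "${masks[i]}" -i $((i + 1)) \
            --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "${workloads[i]}" -t 1 -s 256 &
        pids+=($!)
    done
    wait "${pids[@]}"    # the script waits on the WRITE/READ/FLUSH/UNMAP pids individually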
00:07:47.433 [2024-12-09 11:46:55.266015] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:47.433 [2024-12-09 11:46:55.455777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.691 [2024-12-09 11:46:55.498382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.691 [2024-12-09 11:46:55.548948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.691 [2024-12-09 11:46:55.589321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:47.691 [2024-12-09 11:46:55.663829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.691 [2024-12-09 11:46:55.714997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.691 [2024-12-09 11:46:55.715199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:47.948 [2024-12-09 11:46:55.758057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:47.948 Running I/O for 1 seconds... 00:07:47.948 Running I/O for 1 seconds... 00:07:47.948 Running I/O for 1 seconds... 00:07:47.948 Running I/O for 1 seconds... 00:07:48.882 17082.00 IOPS, 66.73 MiB/s 00:07:48.882 Latency(us) 00:07:48.882 [2024-12-09T10:46:56.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.882 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:48.882 Nvme1n1 : 1.01 17118.08 66.87 0.00 0.00 7453.50 4056.99 13856.18 00:07:48.882 [2024-12-09T10:46:56.935Z] =================================================================================================================== 00:07:48.882 [2024-12-09T10:46:56.935Z] Total : 17118.08 66.87 0.00 0.00 7453.50 4056.99 13856.18 00:07:48.882 17024.00 IOPS, 66.50 MiB/s 00:07:48.882 Latency(us) 00:07:48.882 [2024-12-09T10:46:56.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.882 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:48.882 Nvme1n1 : 1.01 17080.91 66.72 0.00 0.00 7472.92 3963.37 15291.73 00:07:48.882 [2024-12-09T10:46:56.935Z] =================================================================================================================== 00:07:48.882 [2024-12-09T10:46:56.935Z] Total : 17080.91 66.72 0.00 0.00 7472.92 3963.37 15291.73 00:07:48.882 248136.00 IOPS, 969.28 MiB/s 00:07:48.882 Latency(us) 00:07:48.882 [2024-12-09T10:46:56.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.882 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:48.882 Nvme1n1 : 1.00 247764.92 967.83 0.00 0.00 513.41 218.45 1966.08 00:07:48.882 [2024-12-09T10:46:56.935Z] =================================================================================================================== 00:07:48.882 [2024-12-09T10:46:56.935Z] Total : 247764.92 967.83 0.00 0.00 513.41 218.45 1966.08 00:07:48.882 14756.00 IOPS, 57.64 MiB/s 00:07:48.882 Latency(us) 00:07:48.882 [2024-12-09T10:46:56.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.882 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:48.882 Nvme1n1 : 1.01 14845.03 57.99 0.00 0.00 8601.55 3323.61 19099.06 00:07:48.882 [2024-12-09T10:46:56.935Z] 
=================================================================================================================== 00:07:48.882 [2024-12-09T10:46:56.935Z] Total : 14845.03 57.99 0.00 0.00 8601.55 3323.61 19099.06 00:07:49.140 11:46:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3104198 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3104200 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3104203 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.140 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:49.140 rmmod nvme_rdma 00:07:49.140 rmmod nvme_fabrics 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3104170 ']' 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3104170 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3104170 ']' 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3104170 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104170 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104170' 00:07:49.141 killing process with pid 3104170 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3104170 00:07:49.141 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3104170 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:49.399 00:07:49.399 real 0m8.814s 00:07:49.399 user 0m17.080s 00:07:49.399 sys 0m5.727s 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.399 ************************************ 00:07:49.399 END TEST nvmf_bdev_io_wait 00:07:49.399 ************************************ 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.399 11:46:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.659 ************************************ 00:07:49.659 START TEST nvmf_queue_depth 00:07:49.659 ************************************ 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:49.659 * Looking for test storage... 
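Teardown is the stock nvmftestfini sequence: clear the exit trap, unload nvme_rdma and nvme_fabrics (the two rmmod lines above), then reap the target. Judging by the xtrace, killprocess looks roughly like this sketch (the comm check guards against signalling a sudo wrapper instead of the reactor):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true
    }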
00:07:49.659 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.659 --rc genhtml_branch_coverage=1 00:07:49.659 --rc genhtml_function_coverage=1 00:07:49.659 --rc genhtml_legend=1 00:07:49.659 --rc geninfo_all_blocks=1 00:07:49.659 --rc geninfo_unexecuted_blocks=1 00:07:49.659 00:07:49.659 ' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.659 --rc genhtml_branch_coverage=1 00:07:49.659 --rc genhtml_function_coverage=1 00:07:49.659 --rc genhtml_legend=1 00:07:49.659 --rc geninfo_all_blocks=1 00:07:49.659 --rc geninfo_unexecuted_blocks=1 00:07:49.659 00:07:49.659 ' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.659 --rc genhtml_branch_coverage=1 00:07:49.659 --rc genhtml_function_coverage=1 00:07:49.659 --rc genhtml_legend=1 00:07:49.659 --rc geninfo_all_blocks=1 00:07:49.659 --rc geninfo_unexecuted_blocks=1 00:07:49.659 00:07:49.659 ' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.659 --rc genhtml_branch_coverage=1 00:07:49.659 --rc genhtml_function_coverage=1 00:07:49.659 --rc genhtml_legend=1 00:07:49.659 --rc geninfo_all_blocks=1 00:07:49.659 --rc geninfo_unexecuted_blocks=1 00:07:49.659 00:07:49.659 ' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.659 11:46:57 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.659 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.660 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.660 11:46:57 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:56.230 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:56.230 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:56.231 Found 0000:da:00.1 (0x15b3 - 0x1015) 
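The scan above keys candidate NICs on vendor:device pairs, and 0x15b3:0x1015 (ConnectX-4 Lx) lands in the mlx array, which then replaces pci_devs outright because this run is configured for mlx5. Interface names are resolved through sysfs; a sketch of that mapping step, using the exact paths from the trace:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:da:00.0/net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done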
00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:56.231 Found net devices under 0000:da:00.0: mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:56.231 Found net devices under 0000:da:00.1: mlx_0_1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:56.231 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.231 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:56.231 altname enp218s0f0np0 00:07:56.231 altname ens818f0np0 00:07:56.231 inet 192.168.100.8/24 scope global mlx_0_0 00:07:56.231 valid_lft forever preferred_lft forever 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:56.231 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.231 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:56.231 altname enp218s0f1np1 00:07:56.231 altname ens818f1np1 00:07:56.231 inet 192.168.100.9/24 scope global mlx_0_1 00:07:56.231 valid_lft forever preferred_lft forever 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:56.231 11:47:03 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.231 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:56.232 192.168.100.9' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:56.232 192.168.100.9' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:56.232 192.168.100.9' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3107751 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3107751 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3107751 ']' 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 [2024-12-09 11:47:03.644066] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:07:56.232 [2024-12-09 11:47:03.644117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.232 [2024-12-09 11:47:03.726358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.232 [2024-12-09 11:47:03.767339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.232 [2024-12-09 11:47:03.767374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.232 [2024-12-09 11:47:03.767381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.232 [2024-12-09 11:47:03.767388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.232 [2024-12-09 11:47:03.767393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.232 [2024-12-09 11:47:03.767928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 [2024-12-09 11:47:03.924329] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20db930/0x20dfe20) succeed. 00:07:56.232 [2024-12-09 11:47:03.934152] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20dcde0/0x21214c0) succeed. 
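Up to this point the trace has started nvmf_tgt pinned to core 1 and created the RDMA transport; the two create_ib_device notices confirm that both mlx5 ports registered with the target. A minimal out-of-harness sketch of the same bring-up, using only the flags visible in the trace (the sleep is a crude stand-in for the harness's waitforlisten, which polls the RPC socket):

# from the root of an SPDK build tree, as a sketch -- not the harness itself
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &      # same shm id, trace mask, and core mask as the trace
sleep 2                                            # waitforlisten polls /var/tmp/spdk.sock instead
./scripts/rpc.py nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192              # 8 KiB in-capsule data, per queue_depth.sh@23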
00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.232 11:47:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 Malloc0 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 [2024-12-09 11:47:04.029041] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3107915 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3107915 /var/tmp/bdevperf.sock 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3107915 ']' 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.232 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 [2024-12-09 11:47:04.078211] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:07:56.232 [2024-12-09 11:47:04.078249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107915 ] 00:07:56.232 [2024-12-09 11:47:04.153351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.232 [2024-12-09 11:47:04.195582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.491 NVMe0n1 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.491 11:47:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.491 Running I/O for 10 seconds... 
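The RPC sequence traced above is the entire provisioning path for this test: one 64 MiB malloc bdev with 512-byte blocks, one subsystem, one RDMA listener, then bdevperf attached as the initiator. Replayed by hand it reduces to the following (every flag is copied from the trace; paths assume the same SPDK tree):

# target side, against the default /var/tmp/spdk.sock
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

# initiator side: bdevperf in RPC-driven mode (-z), queue depth 1024, 4 KiB verify, 10 s
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests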
00:07:58.796 16552.00 IOPS, 64.66 MiB/s [2024-12-09T10:47:07.781Z] 16964.50 IOPS, 66.27 MiB/s [2024-12-09T10:47:08.715Z] 17109.00 IOPS, 66.83 MiB/s [2024-12-09T10:47:09.648Z] 17152.00 IOPS, 67.00 MiB/s [2024-12-09T10:47:10.582Z] 17203.20 IOPS, 67.20 MiB/s [2024-12-09T10:47:11.516Z] 17237.33 IOPS, 67.33 MiB/s [2024-12-09T10:47:12.895Z] 17261.71 IOPS, 67.43 MiB/s [2024-12-09T10:47:13.831Z] 17280.00 IOPS, 67.50 MiB/s [2024-12-09T10:47:14.768Z] 17294.22 IOPS, 67.56 MiB/s [2024-12-09T10:47:14.768Z] 17305.60 IOPS, 67.60 MiB/s 00:08:06.715 Latency(us) 00:08:06.715 [2024-12-09T10:47:14.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.715 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:06.715 Verification LBA range: start 0x0 length 0x4000 00:08:06.715 NVMe0n1 : 10.05 17322.68 67.67 0.00 0.00 58966.01 22968.81 37199.48 00:08:06.715 [2024-12-09T10:47:14.768Z] =================================================================================================================== 00:08:06.715 [2024-12-09T10:47:14.768Z] Total : 17322.68 67.67 0.00 0.00 58966.01 22968.81 37199.48 00:08:06.715 { 00:08:06.715 "results": [ 00:08:06.715 { 00:08:06.715 "job": "NVMe0n1", 00:08:06.715 "core_mask": "0x1", 00:08:06.715 "workload": "verify", 00:08:06.715 "status": "finished", 00:08:06.715 "verify_range": { 00:08:06.715 "start": 0, 00:08:06.715 "length": 16384 00:08:06.715 }, 00:08:06.715 "queue_depth": 1024, 00:08:06.715 "io_size": 4096, 00:08:06.715 "runtime": 10.049252, 00:08:06.715 "iops": 17322.682325012847, 00:08:06.715 "mibps": 67.66672783208143, 00:08:06.715 "io_failed": 0, 00:08:06.715 "io_timeout": 0, 00:08:06.715 "avg_latency_us": 58966.014924369745, 00:08:06.715 "min_latency_us": 22968.80761904762, 00:08:06.715 "max_latency_us": 37199.4819047619 00:08:06.715 } 00:08:06.715 ], 00:08:06.715 "core_count": 1 00:08:06.715 } 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3107915 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3107915 ']' 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3107915 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107915 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107915' 00:08:06.715 killing process with pid 3107915 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3107915 00:08:06.715 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.715 00:08:06.715 Latency(us) 00:08:06.715 [2024-12-09T10:47:14.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.715 [2024-12-09T10:47:14.768Z] 
=================================================================================================================== 00:08:06.715 [2024-12-09T10:47:14.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.715 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3107915 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:06.975 rmmod nvme_rdma 00:08:06.975 rmmod nvme_fabrics 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3107751 ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3107751 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3107751 ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3107751 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107751 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107751' 00:08:06.975 killing process with pid 3107751 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3107751 00:08:06.975 11:47:14 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3107751 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:07.234 00:08:07.234 real 0m17.641s 00:08:07.234 user 0m24.105s 00:08:07.234 sys 0m5.001s 00:08:07.234 
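The nvmftestfini sequence above mirrors the setup in reverse: stop bdevperf, stop the target, then unload the fabrics modules. A hedged sketch of that teardown, reusing the pid variable names the trace itself assigns ($bdevperf_pid and $nvmfpid):

# sketch of nvmftestfini's effect, assuming the pids captured earlier in the trace
kill -0 "$bdevperf_pid" 2>/dev/null && kill "$bdevperf_pid"   # initiator first
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"             # then nvmf_tgt
sync
modprobe -v -r nvme-rdma       # emits the rmmod nvme_rdma / nvme_fabrics lines seen above
modprobe -v -r nvme-fabrics    # retried; may be a no-op once nvme-rdma pulled it out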
11:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.234 ************************************ 00:08:07.234 END TEST nvmf_queue_depth 00:08:07.234 ************************************ 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.234 ************************************ 00:08:07.234 START TEST nvmf_target_multipath 00:08:07.234 ************************************ 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:07.234 * Looking for test storage... 00:08:07.234 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.234 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.496 --rc genhtml_branch_coverage=1 00:08:07.496 --rc genhtml_function_coverage=1 00:08:07.496 --rc genhtml_legend=1 00:08:07.496 --rc geninfo_all_blocks=1 00:08:07.496 --rc geninfo_unexecuted_blocks=1 00:08:07.496 00:08:07.496 ' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.496 --rc genhtml_branch_coverage=1 00:08:07.496 --rc genhtml_function_coverage=1 00:08:07.496 --rc genhtml_legend=1 00:08:07.496 --rc geninfo_all_blocks=1 00:08:07.496 --rc geninfo_unexecuted_blocks=1 00:08:07.496 00:08:07.496 ' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.496 --rc genhtml_branch_coverage=1 00:08:07.496 --rc genhtml_function_coverage=1 00:08:07.496 --rc genhtml_legend=1 00:08:07.496 --rc geninfo_all_blocks=1 00:08:07.496 --rc geninfo_unexecuted_blocks=1 00:08:07.496 00:08:07.496 ' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.496 --rc genhtml_branch_coverage=1 00:08:07.496 --rc genhtml_function_coverage=1 00:08:07.496 --rc genhtml_legend=1 00:08:07.496 --rc geninfo_all_blocks=1 00:08:07.496 --rc geninfo_unexecuted_blocks=1 00:08:07.496 00:08:07.496 ' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.496 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.496 11:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:14.070 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:14.070 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:14.070 Found net devices under 0000:da:00.0: mlx_0_0 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:14.070 
11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:14.070 Found net devices under 0000:da:00.1: mlx_0_1 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:08:14.070 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:14.071 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.071 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:14.071 altname enp218s0f0np0 00:08:14.071 altname ens818f0np0 00:08:14.071 inet 192.168.100.8/24 scope global mlx_0_0 00:08:14.071 valid_lft forever preferred_lft forever 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:14.071 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:14.071 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:14.071 altname enp218s0f1np1 00:08:14.071 altname ens818f1np1 00:08:14.071 inet 192.168.100.9/24 scope global mlx_0_1 00:08:14.071 valid_lft forever preferred_lft forever 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:14.071 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:14.072 192.168.100.9' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:14.072 192.168.100.9' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:14.072 192.168.100.9' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:08:14.072 run this test only with TCP transport for now 00:08:14.072 11:47:21 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:14.072 rmmod nvme_rdma 00:08:14.072 rmmod nvme_fabrics 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:08:14.072 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:14.073 00:08:14.073 real 0m6.174s 00:08:14.073 user 0m1.801s 00:08:14.073 sys 0m4.507s 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
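The trace above shows how the harness discovers the two RDMA target addresses: get_ip_address (nvmf/common.sh@116-117) pipes `ip -o -4 addr show` through awk and cut, and common.sh@484-486 then splits the newline-separated RDMA_IP_LIST into first and second target IPs with head/tail. A minimal sketch of those two steps, assuming the two-interface layout seen here:

    # Strip "192.168.100.8/24" down to the bare address, exactly as the
    # trace does for mlx_0_0 and mlx_0_1.
    get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Newline-separated list, one address per discovered RDMA interface.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9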
00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:14.073 ************************************ 00:08:14.073 END TEST nvmf_target_multipath 00:08:14.073 ************************************ 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.073 ************************************ 00:08:14.073 START TEST nvmf_zcopy 00:08:14.073 ************************************ 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:14.073 * Looking for test storage... 00:08:14.073 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.073 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.073 --rc genhtml_branch_coverage=1 00:08:14.073 --rc genhtml_function_coverage=1 00:08:14.073 --rc genhtml_legend=1 00:08:14.074 --rc geninfo_all_blocks=1 00:08:14.074 --rc geninfo_unexecuted_blocks=1 00:08:14.074 00:08:14.074 ' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.074 --rc genhtml_branch_coverage=1 00:08:14.074 --rc genhtml_function_coverage=1 00:08:14.074 --rc genhtml_legend=1 00:08:14.074 --rc geninfo_all_blocks=1 00:08:14.074 --rc geninfo_unexecuted_blocks=1 00:08:14.074 00:08:14.074 ' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.074 --rc genhtml_branch_coverage=1 00:08:14.074 --rc genhtml_function_coverage=1 00:08:14.074 --rc genhtml_legend=1 00:08:14.074 --rc geninfo_all_blocks=1 00:08:14.074 --rc geninfo_unexecuted_blocks=1 00:08:14.074 00:08:14.074 ' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.074 --rc genhtml_branch_coverage=1 00:08:14.074 --rc genhtml_function_coverage=1 00:08:14.074 --rc genhtml_legend=1 00:08:14.074 --rc geninfo_all_blocks=1 00:08:14.074 --rc geninfo_unexecuted_blocks=1 00:08:14.074 00:08:14.074 ' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.074 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.075 11:47:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:19.349 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:19.349 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
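After matching a vendor/device pair like "Found 0000:da:00.0 (0x15b3 - 0x1015)", the next trace lines resolve each PCI function to its network interface by globbing sysfs (nvmf/common.sh@411 and @427). A minimal sketch of that lookup, assuming the standard /sys/bus/pci layout:

    pci=0000:da:00.0
    # Each entry under .../net/ is an interface owned by this PCI function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep basenames only, e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"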
00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:19.349 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:19.350 Found net devices under 0000:da:00.0: mlx_0_0 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:19.350 Found net devices under 0000:da:00.1: mlx_0_1 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:19.350 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:19.610 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.610 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:19.610 altname enp218s0f0np0 00:08:19.610 altname ens818f0np0 00:08:19.610 inet 192.168.100.8/24 scope global mlx_0_0 
00:08:19.610 valid_lft forever preferred_lft forever 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:19.610 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.610 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:19.610 altname enp218s0f1np1 00:08:19.610 altname ens818f1np1 00:08:19.610 inet 192.168.100.9/24 scope global mlx_0_1 00:08:19.610 valid_lft forever preferred_lft forever 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.610 11:47:27 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:19.610 192.168.100.9' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:19.610 192.168.100.9' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:19.610 192.168.100.9' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3116008 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3116008 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3116008 ']' 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.610 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.610 [2024-12-09 11:47:27.582440] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:08:19.610 [2024-12-09 11:47:27.582481] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.610 [2024-12-09 11:47:27.641379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.869 [2024-12-09 11:47:27.683370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.869 [2024-12-09 11:47:27.683400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.869 [2024-12-09 11:47:27.683406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.869 [2024-12-09 11:47:27.683412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.869 [2024-12-09 11:47:27.683417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
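waitforlisten blocks until the freshly started nvmf_tgt (pid 3116008 here) is up and serving RPCs on /var/tmp/spdk.sock. A hypothetical stand-in for that wait, not SPDK's actual helper, just the shape of the poll:

    # Poll until the RPC socket appears, bailing out if the target dies first.
    wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited early
        [[ -S $sock ]] && return 0               # socket exists: ready enough here
        sleep 0.1
      done
      return 1                                   # timed out
    }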
00:08:19.869 [2024-12-09 11:47:27.684020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:08:19.869 Unsupported transport: rdma 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:19.869 nvmf_trace.0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:19.869 rmmod nvme_rdma 00:08:19.869 rmmod nvme_fabrics 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
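target/zcopy.sh@15-17 above is the whole point of this test on RDMA: zcopy only supports TCP, so it prints 'Unsupported transport: rdma' and exits 0, and the EXIT trap installed earlier (nvmf/common.sh@512) still drives the teardown. A minimal sketch of that guard-plus-trap pattern, with a stub standing in for the real nvmftestfini:

    nvmftestfini() { echo "teardown runs here"; }   # stub for the sketch
    trap 'nvmftestfini' SIGINT SIGTERM EXIT

    TEST_TRANSPORT=rdma
    if [ "$TEST_TRANSPORT" != tcp ]; then
      echo "Unsupported transport: $TEST_TRANSPORT"
      exit 0   # still counts as a pass; the EXIT trap fires nvmftestfini on the way out
    fi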
00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:19.869 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3116008 ']' 00:08:19.870 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3116008 00:08:19.870 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3116008 ']' 00:08:19.870 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3116008 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116008 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116008' 00:08:20.129 killing process with pid 3116008 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3116008 00:08:20.129 11:47:27 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3116008 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:20.129 00:08:20.129 real 0m6.690s 00:08:20.129 user 0m2.528s 00:08:20.129 sys 0m4.686s 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.129 ************************************ 00:08:20.129 END TEST nvmf_zcopy 00:08:20.129 ************************************ 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.129 11:47:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.388 ************************************ 00:08:20.388 START TEST nvmf_nmic 00:08:20.388 ************************************ 00:08:20.388 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:20.388 * Looking for test storage... 
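killprocess in the trace above (autotest_common.sh@954-978) refuses to signal anything it should not: it checks the pid is alive, reads the command name with ps, and skips sudo wrappers before killing and reaping. A simplified sketch of that sequence, not the full helper:

    killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                  # must be a live process
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in the trace
      [ "$name" = sudo ] && return 1              # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true             # reap if it is our child
    }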
00:08:20.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:20.388 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.388 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.388 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.388 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.389 --rc genhtml_branch_coverage=1 00:08:20.389 --rc genhtml_function_coverage=1 00:08:20.389 --rc genhtml_legend=1 00:08:20.389 --rc geninfo_all_blocks=1 00:08:20.389 --rc geninfo_unexecuted_blocks=1 00:08:20.389 00:08:20.389 ' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.389 --rc genhtml_branch_coverage=1 00:08:20.389 --rc genhtml_function_coverage=1 00:08:20.389 --rc genhtml_legend=1 00:08:20.389 --rc geninfo_all_blocks=1 00:08:20.389 --rc geninfo_unexecuted_blocks=1 00:08:20.389 00:08:20.389 ' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.389 --rc genhtml_branch_coverage=1 00:08:20.389 --rc genhtml_function_coverage=1 00:08:20.389 --rc genhtml_legend=1 00:08:20.389 --rc geninfo_all_blocks=1 00:08:20.389 --rc geninfo_unexecuted_blocks=1 00:08:20.389 00:08:20.389 ' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.389 --rc genhtml_branch_coverage=1 00:08:20.389 --rc genhtml_function_coverage=1 00:08:20.389 --rc genhtml_legend=1 00:08:20.389 --rc geninfo_all_blocks=1 00:08:20.389 --rc geninfo_unexecuted_blocks=1 00:08:20.389 00:08:20.389 ' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.389 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
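The xtrace above covers two things: scripts/common.sh comparing the installed lcov version against 2 (splitting each version string on '.', '-' and ':' and comparing component-wise), and nmic.sh sourcing nvmf/common.sh, which pins ports 4420-4422 and the 192.168.100.0/24 RDMA prefix. A minimal, self-contained sketch of that component-wise comparison — the function name is illustrative, not the script's own:

  #!/usr/bin/env bash
  # Return 0 (true) when version $1 sorts strictly before version $2.
  version_lt() {
      local IFS='.-:'
      local -a v1=($1) v2=($2)
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov older than 2.x: enable branch/function coverage flags'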
00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.389 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.390 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.390 11:47:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.959 11:47:34 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:26.959 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:26.960 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:26.960 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:26.960 Found net devices under 0000:da:00.0: mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:26.960 Found net devices under 0000:da:00.1: mlx_0_1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
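nvmftestinit has found both mlx5 ports (0000:da:00.0 and 0000:da:00.1, device ID 0x1015) and now brings up the kernel RDMA stack: rdma_device_init loads the IB/RDMA modules one by one before allocate_nic_ips assigns addresses. The same sequence as a standalone loop — module names copied from this trace:

  #!/usr/bin/env bash
  # Load the kernel modules NVMe-oF/RDMA needs; order matches the trace above.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe -v "$mod" || { echo "cannot load $mod" >&2; exit 1; }
  done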
00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:26.960 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.960 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:26.960 altname enp218s0f0np0 00:08:26.960 altname 
ens818f0np0 00:08:26.960 inet 192.168.100.8/24 scope global mlx_0_0 00:08:26.960 valid_lft forever preferred_lft forever 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:26.960 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.960 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:26.960 altname enp218s0f1np1 00:08:26.960 altname ens818f1np1 00:08:26.960 inet 192.168.100.9/24 scope global mlx_0_1 00:08:26.960 valid_lft forever preferred_lft forever 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
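Both ports come up with the expected addresses (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9, both still link-state DOWN at this point). The per-interface lookup seen in the trace is a three-stage pipeline; a sketch of a helper built on the same commands:

  # Print the first IPv4 address of an interface, without the /prefix length.
  get_ip_address() {
      # 'ip -o' emits one record per line; field 4 is CIDR, e.g. 192.168.100.8/24
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig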
00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.960 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:26.961 192.168.100.9' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:26.961 192.168.100.9' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:26.961 192.168.100.9' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3119288 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3119288 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3119288 ']' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-12-09 11:47:34.348726] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:08:26.961 [2024-12-09 11:47:34.348769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.961 [2024-12-09 11:47:34.427380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.961 [2024-12-09 11:47:34.470402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.961 [2024-12-09 11:47:34.470437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.961 [2024-12-09 11:47:34.470444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.961 [2024-12-09 11:47:34.470450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.961 [2024-12-09 11:47:34.470455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
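nvmfappstart records the target's pid (nvmfpid=3119288) and blocks in waitforlisten until the JSON-RPC socket answers; the reactors then start on cores 0-3 per the 0xF mask. A reduced sketch of that start-and-wait pattern, not the harness's exact code — rpc_get_methods is a standard SPDK RPC, and paths are as in this workspace:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready to serve requests.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
      sleep 0.5
  done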
00:08:26.961 [2024-12-09 11:47:34.471959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.961 [2024-12-09 11:47:34.472071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.961 [2024-12-09 11:47:34.472198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.961 [2024-12-09 11:47:34.472199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-12-09 11:47:34.637798] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x186b940/0x186fe30) succeed. 00:08:26.961 [2024-12-09 11:47:34.649218] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x186cfd0/0x18b14d0) succeed. 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 Malloc0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.961 11:47:34 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-12-09 11:47:34.826503] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:26.961 test case1: single bdev can't be used in multiple subsystems 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-12-09 11:47:34.850271] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:26.961 [2024-12-09 11:47:34.850291] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:26.961 [2024-12-09 11:47:34.850298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.961 request: 00:08:26.961 { 00:08:26.961 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:26.961 "namespace": { 00:08:26.961 "bdev_name": "Malloc0", 00:08:26.961 "no_auto_visible": false, 00:08:26.961 "hide_metadata": false 00:08:26.961 }, 00:08:26.961 "method": "nvmf_subsystem_add_ns", 00:08:26.961 "req_id": 1 00:08:26.961 } 00:08:26.961 Got JSON-RPC error response 00:08:26.961 response: 00:08:26.961 { 00:08:26.961 "code": -32602, 00:08:26.961 "message": "Invalid parameters" 00:08:26.961 } 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:26.961 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:08:26.962 Adding namespace failed - expected result. 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:26.962 test case2: host connect to nvmf target in multiple paths 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.962 [2024-12-09 11:47:34.862318] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.962 11:47:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:27.897 11:47:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:08:28.834 11:47:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.834 11:47:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:28.834 11:47:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.834 11:47:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:28.834 11:47:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:31.365 11:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:31.365 [global] 00:08:31.365 thread=1 00:08:31.365 invalidate=1 00:08:31.365 rw=write 00:08:31.365 time_based=1 00:08:31.365 runtime=1 00:08:31.365 ioengine=libaio 00:08:31.365 direct=1 00:08:31.365 bs=4096 00:08:31.365 iodepth=1 00:08:31.365 norandommap=0 00:08:31.365 numjobs=1 00:08:31.365 00:08:31.365 verify_dump=1 00:08:31.365 verify_backlog=512 00:08:31.365 verify_state_save=0 00:08:31.365 do_verify=1 00:08:31.365 verify=crc32c-intel 00:08:31.365 [job0] 00:08:31.365 filename=/dev/nvme0n1 00:08:31.365 Could not set queue depth 
(nvme0n1) 00:08:31.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.365 fio-3.35 00:08:31.365 Starting 1 thread 00:08:32.299 00:08:32.299 job0: (groupid=0, jobs=1): err= 0: pid=3120356: Mon Dec 9 11:47:40 2024 00:08:32.299 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:08:32.299 slat (nsec): min=6226, max=26507, avg=6992.32, stdev=719.51 00:08:32.299 clat (usec): min=44, max=175, avg=60.59, stdev= 5.10 00:08:32.299 lat (usec): min=54, max=182, avg=67.58, stdev= 5.17 00:08:32.299 clat percentiles (usec): 00:08:32.299 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:08:32.299 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:08:32.299 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 70], 00:08:32.299 | 99.00th=[ 74], 99.50th=[ 76], 99.90th=[ 81], 99.95th=[ 94], 00:08:32.299 | 99.99th=[ 176] 00:08:32.299 write: IOPS=7220, BW=28.2MiB/s (29.6MB/s)(28.2MiB/1001msec); 0 zone resets 00:08:32.299 slat (nsec): min=5729, max=44469, avg=9059.73, stdev=1115.26 00:08:32.299 clat (usec): min=43, max=169, avg=58.54, stdev= 5.34 00:08:32.299 lat (usec): min=55, max=178, avg=67.60, stdev= 5.57 00:08:32.299 clat percentiles (usec): 00:08:32.299 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:08:32.299 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 60], 00:08:32.299 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 66], 95.00th=[ 68], 00:08:32.299 | 99.00th=[ 72], 99.50th=[ 74], 99.90th=[ 91], 99.95th=[ 103], 00:08:32.299 | 99.99th=[ 169] 00:08:32.299 bw ( KiB/s): min=28672, max=28672, per=99.27%, avg=28672.00, stdev= 0.00, samples=1 00:08:32.299 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:08:32.299 lat (usec) : 50=1.14%, 100=98.79%, 250=0.07% 00:08:32.299 cpu : usr=7.20%, sys=16.10%, ctx=14396, majf=0, minf=1 00:08:32.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.299 issued rwts: total=7168,7228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.299 00:08:32.299 Run status group 0 (all jobs): 00:08:32.299 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:08:32.299 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=28.2MiB (29.6MB), run=1001-1001msec 00:08:32.299 00:08:32.299 Disk stats (read/write): 00:08:32.299 nvme0n1: ios=6307/6656, merge=0/0, ticks=344/354, in_queue=698, util=90.48% 00:08:32.299 11:47:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 
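The fio write job completes cleanly at queue depth 1 (roughly 7.2k IOPS in each direction at 4 KiB, ~28 MiB/s), and nvme disconnect tears down both paths. The harness then polls until no block device reports the subsystem serial any longer; a sketch of that wait loop, built on the same lsblk/grep pair seen in the trace (the retry limit is illustrative):

  # Wait until no nvme block device reports the given serial number.
  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( ++i > 15 )) && return 1   # give up after ~15s
          sleep 1
      done
      return 0
  }
  waitforserial_disconnect SPDKISFASTANDAWESOME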
00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.202 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:34.202 rmmod nvme_rdma 00:08:34.462 rmmod nvme_fabrics 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3119288 ']' 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3119288 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3119288 ']' 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3119288 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119288 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119288' 00:08:34.462 killing process with pid 3119288 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3119288 00:08:34.462 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3119288 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:34.722 00:08:34.722 real 0m14.430s 00:08:34.722 user 0m39.801s 00:08:34.722 sys 0m5.179s 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.722 ************************************ 00:08:34.722 END TEST 
nvmf_nmic 00:08:34.722 ************************************ 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.722 ************************************ 00:08:34.722 START TEST nvmf_fio_target 00:08:34.722 ************************************ 00:08:34.722 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:34.722 * Looking for test storage... 00:08:34.982 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.982 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.983 --rc genhtml_branch_coverage=1 00:08:34.983 --rc genhtml_function_coverage=1 00:08:34.983 --rc genhtml_legend=1 00:08:34.983 --rc geninfo_all_blocks=1 00:08:34.983 --rc geninfo_unexecuted_blocks=1 00:08:34.983 00:08:34.983 ' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.983 --rc genhtml_branch_coverage=1 00:08:34.983 --rc genhtml_function_coverage=1 00:08:34.983 --rc genhtml_legend=1 00:08:34.983 --rc geninfo_all_blocks=1 00:08:34.983 --rc geninfo_unexecuted_blocks=1 00:08:34.983 00:08:34.983 ' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.983 --rc genhtml_branch_coverage=1 00:08:34.983 --rc genhtml_function_coverage=1 00:08:34.983 --rc genhtml_legend=1 00:08:34.983 --rc geninfo_all_blocks=1 00:08:34.983 --rc geninfo_unexecuted_blocks=1 00:08:34.983 00:08:34.983 ' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.983 --rc genhtml_branch_coverage=1 00:08:34.983 --rc genhtml_function_coverage=1 00:08:34.983 --rc genhtml_legend=1 00:08:34.983 --rc geninfo_all_blocks=1 00:08:34.983 --rc geninfo_unexecuted_blocks=1 00:08:34.983 00:08:34.983 ' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.983 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.983 
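fio.sh re-sources nvmf/common.sh, so the same line-33 complaint reappears: test's '[' is handed an empty string where -eq needs an integer. The run proceeds regardless, but the usual defensive pattern is to default the variable before the numeric test — the variable name below is illustrative, not the script's:

  # '[ "" -eq 1 ]' is a bash error; default empty/unset values to 0 first.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo 'flag enabled'
  fi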
11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.983 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.984 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.984 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.984 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.984 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.984 11:47:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
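The trace above shows gather_supported_nvmf_pci_devs filling per-family PCI ID tables (e810, x722, mlx) out of a pci_bus_cache array keyed by "$vendor:$device". A minimal sketch, not the suite's own helper, of how such a cache could be built from lspci; the parsing of lspci -Dnmm and the reuse of the array name are illustrative assumptions:

declare -A pci_bus_cache
# lspci -Dnmm prints one device per line: <domain:bus:dev.fn> "<class>" "<vendor>" "<device>" ...
while read -r addr class vendor device _; do
    pci_bus_cache["0x$vendor:0x$device"]+="$addr "   # e.g. key 0x15b3:0x1015
done < <(lspci -Dnmm | tr -d '"')
# The two ConnectX-4 Lx ports this run reports (0x15b3 - 0x1015):
echo "mlx5 ports: ${pci_bus_cache[0x15b3:0x1015]}"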
00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:41.558 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:41.558 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:41.558 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:41.559 Found net devices under 0000:da:00.0: mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:41.559 Found net devices under 0000:da:00.1: mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:41.559 11:47:48 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:41.559 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.559 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:41.559 altname enp218s0f0np0 00:08:41.559 altname ens818f0np0 00:08:41.559 inet 192.168.100.8/24 scope global mlx_0_0 00:08:41.559 valid_lft forever preferred_lft forever 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:41.559 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.559 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:41.559 altname enp218s0f1np1 00:08:41.559 altname ens818f1np1 00:08:41.559 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.559 valid_lft forever preferred_lft forever 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.559 11:47:48 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.559 192.168.100.9' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:41.559 192.168.100.9' 00:08:41.559 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:41.560 192.168.100.9'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3124067
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3124067
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3124067 ']'
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:41.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:41.560 11:47:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:41.560 [2024-12-09 11:47:48.872283] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:08:41.560 [2024-12-09 11:47:48.872330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:41.560 [2024-12-09 11:47:48.947985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:41.560 [2024-12-09 11:47:48.987986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
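At this point the trace launches nvmf_tgt in the background (nvmf/common.sh@508) and enters waitforlisten with max_retries=100. A hedged sketch of that launch-and-poll pattern; waitforlisten's real body differs, but rpc_get_methods is a standard SPDK RPC and only answers once the socket is up:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do   # mirrors max_retries=100 in the trace above
    # give up if the target process died before the RPC socket appeared
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done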
00:08:41.560 [2024-12-09 11:47:48.988024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:41.560 [2024-12-09 11:47:48.988031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:41.560 [2024-12-09 11:47:48.988037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:41.560 [2024-12-09 11:47:48.988043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:41.560 [2024-12-09 11:47:48.989651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:41.560 [2024-12-09 11:47:48.989760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:41.560 [2024-12-09 11:47:48.989876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.560 [2024-12-09 11:47:48.989877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:41.560 [2024-12-09 11:47:49.320624] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1223940/0x1227e30) succeed.
00:08:41.560 [2024-12-09 11:47:49.332046] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1224fd0/0x12694d0) succeed.
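The nvmf_create_transport call above registers the RDMA transport with 1024 shared buffers and an 8192-byte IO unit, and the two create_ib_device notices confirm both mlx5 ports were claimed. As a sketch, the result could be inspected over the same RPC socket; nvmf_get_transports is a standard SPDK RPC, though this run does not call it:

scripts/rpc.py nvmf_get_transports
# expected to list a transport with "trtype": "RDMA" and "io_unit_size": 8192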
00:08:41.560 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:41.819 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:08:41.819 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:42.078 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:08:42.078 11:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:42.338 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:08:42.338 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:42.338 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:08:42.338 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:08:42.596 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:42.855 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:08:42.855 11:47:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:43.114 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:08:43.114 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:43.374 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:08:43.374 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:08:43.374 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:43.632 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:43.632 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:43.891 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:43.891 11:47:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:44.149 11:47:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:08:44.149 [2024-12-09 11:47:52.200426] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
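Subsystem nqn.2016-06.io.spdk:cnode1 now exists with serial SPDKISFASTANDAWESOME, namespaces Malloc0 and Malloc1, and an RDMA listener on 192.168.100.8:4420 (the raid0 and concat0 namespaces are attached just below). A sketch of how the assembled subsystem could be verified; nvmf_get_subsystems is a standard SPDK RPC, though this run does not call it:

scripts/rpc.py nvmf_get_subsystems
# expected: cnode1 with four namespaces and one RDMA listen_address, trsvcid 4420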
00:08:44.408 11:47:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:08:44.408 11:47:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:08:44.666 11:47:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:08:45.602 11:47:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:08:48.134 11:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:48.134 [global]
00:08:48.134 thread=1
00:08:48.134 invalidate=1
00:08:48.134 rw=write
00:08:48.134 time_based=1
00:08:48.134 runtime=1
00:08:48.134 ioengine=libaio
00:08:48.134 direct=1
00:08:48.134 bs=4096
00:08:48.134 iodepth=1
00:08:48.134 norandommap=0
00:08:48.134 numjobs=1
00:08:48.134 
00:08:48.134 verify_dump=1
00:08:48.134 verify_backlog=512
00:08:48.134 verify_state_save=0
00:08:48.134 do_verify=1
00:08:48.134 verify=crc32c-intel
00:08:48.134 [job0]
00:08:48.134 filename=/dev/nvme0n1
00:08:48.134 [job1]
00:08:48.134 filename=/dev/nvme0n2
00:08:48.134 [job2]
00:08:48.134 filename=/dev/nvme0n3
00:08:48.134 [job3]
00:08:48.134 filename=/dev/nvme0n4
00:08:48.134 Could not set queue depth (nvme0n1)
00:08:48.134 Could not set queue depth (nvme0n2)
00:08:48.134 Could not set queue depth (nvme0n3)
00:08:48.134 Could not set queue depth (nvme0n4)
00:08:48.134 job0: (g=0): rw=write, bs=(R)
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.134 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.134 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.134 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.134 fio-3.35 00:08:48.134 Starting 4 threads 00:08:49.510 00:08:49.510 job0: (groupid=0, jobs=1): err= 0: pid=3125484: Mon Dec 9 11:47:57 2024 00:08:49.510 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.510 slat (nsec): min=6010, max=28385, avg=6967.52, stdev=958.01 00:08:49.510 clat (usec): min=70, max=215, avg=146.44, stdev=26.80 00:08:49.510 lat (usec): min=77, max=222, avg=153.41, stdev=26.71 00:08:49.510 clat percentiles (usec): 00:08:49.510 | 1.00th=[ 80], 5.00th=[ 92], 10.00th=[ 102], 20.00th=[ 123], 00:08:49.510 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:08:49.510 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:08:49.510 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 215], 00:08:49.510 | 99.99th=[ 217] 00:08:49.510 write: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:08:49.510 slat (nsec): min=7980, max=38610, avg=9017.77, stdev=1147.64 00:08:49.510 clat (usec): min=67, max=283, avg=136.15, stdev=27.37 00:08:49.510 lat (usec): min=76, max=292, avg=145.17, stdev=27.38 00:08:49.510 clat percentiles (usec): 00:08:49.510 | 1.00th=[ 82], 5.00th=[ 87], 10.00th=[ 95], 20.00th=[ 109], 00:08:49.510 | 30.00th=[ 118], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 149], 00:08:49.510 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 172], 00:08:49.511 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 198], 99.95th=[ 202], 00:08:49.511 | 99.99th=[ 285] 00:08:49.511 bw ( KiB/s): min=13616, max=13616, per=24.50%, avg=13616.00, stdev= 0.00, samples=1 00:08:49.511 iops : min= 3404, max= 3404, avg=3404.00, stdev= 0.00, samples=1 00:08:49.511 lat (usec) : 100=11.62%, 250=88.37%, 500=0.02% 00:08:49.511 cpu : usr=3.60%, sys=7.40%, ctx=6629, majf=0, minf=1 00:08:49.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 issued rwts: total=3072,3557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.511 job1: (groupid=0, jobs=1): err= 0: pid=3125485: Mon Dec 9 11:47:57 2024 00:08:49.511 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.511 slat (nsec): min=6079, max=23435, avg=7148.82, stdev=1375.58 00:08:49.511 clat (usec): min=69, max=221, avg=147.43, stdev=22.64 00:08:49.511 lat (usec): min=77, max=240, avg=154.58, stdev=22.71 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 90], 5.00th=[ 98], 10.00th=[ 114], 20.00th=[ 143], 00:08:49.511 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 149], 60.00th=[ 151], 00:08:49.511 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 174], 95.00th=[ 186], 00:08:49.511 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 217], 99.95th=[ 221], 00:08:49.511 | 99.99th=[ 223] 00:08:49.511 write: IOPS=3521, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1001msec); 0 zone resets 00:08:49.511 slat (nsec): min=7990, max=39932, avg=9060.75, stdev=1072.17 00:08:49.511 clat (usec): 
min=62, max=276, avg=136.37, stdev=29.90 00:08:49.511 lat (usec): min=71, max=285, avg=145.43, stdev=29.93 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 109], 00:08:49.511 | 30.00th=[ 131], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:08:49.511 | 70.00th=[ 147], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 192], 00:08:49.511 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 208], 99.95th=[ 210], 00:08:49.511 | 99.99th=[ 277] 00:08:49.511 bw ( KiB/s): min=13616, max=13616, per=24.50%, avg=13616.00, stdev= 0.00, samples=1 00:08:49.511 iops : min= 3404, max= 3404, avg=3404.00, stdev= 0.00, samples=1 00:08:49.511 lat (usec) : 100=10.88%, 250=89.10%, 500=0.02% 00:08:49.511 cpu : usr=4.10%, sys=6.70%, ctx=6597, majf=0, minf=1 00:08:49.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 issued rwts: total=3072,3525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.511 job2: (groupid=0, jobs=1): err= 0: pid=3125486: Mon Dec 9 11:47:57 2024 00:08:49.511 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.511 slat (nsec): min=6177, max=47776, avg=7484.18, stdev=1561.45 00:08:49.511 clat (usec): min=78, max=219, avg=148.12, stdev=24.86 00:08:49.511 lat (usec): min=85, max=227, avg=155.61, stdev=24.92 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 103], 20.00th=[ 137], 00:08:49.511 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:08:49.511 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 188], 00:08:49.511 | 99.00th=[ 204], 99.50th=[ 206], 99.90th=[ 215], 99.95th=[ 219], 00:08:49.511 | 99.99th=[ 221] 00:08:49.511 write: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:08:49.511 slat (nsec): min=8277, max=47898, avg=9969.97, stdev=2440.91 00:08:49.511 clat (usec): min=76, max=204, avg=138.60, stdev=25.96 00:08:49.511 lat (usec): min=85, max=221, avg=148.57, stdev=26.14 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 95], 20.00th=[ 118], 00:08:49.511 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:08:49.511 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 186], 00:08:49.511 | 99.00th=[ 194], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 204], 00:08:49.511 | 99.99th=[ 204] 00:08:49.511 bw ( KiB/s): min=13512, max=13512, per=24.31%, avg=13512.00, stdev= 0.00, samples=1 00:08:49.511 iops : min= 3378, max= 3378, avg=3378.00, stdev= 0.00, samples=1 00:08:49.511 lat (usec) : 100=10.05%, 250=89.95% 00:08:49.511 cpu : usr=3.90%, sys=6.80%, ctx=6499, majf=0, minf=1 00:08:49.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 issued rwts: total=3072,3426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.511 job3: (groupid=0, jobs=1): err= 0: pid=3125487: Mon Dec 9 11:47:57 2024 00:08:49.511 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.511 slat (nsec): min=6084, max=26468, avg=8071.37, stdev=2461.98 00:08:49.511 
clat (usec): min=82, max=220, avg=148.01, stdev=25.24 00:08:49.511 lat (usec): min=89, max=228, avg=156.08, stdev=25.42 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 104], 20.00th=[ 135], 00:08:49.511 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:08:49.511 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 190], 00:08:49.511 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 217], 99.95th=[ 219], 00:08:49.511 | 99.99th=[ 221] 00:08:49.511 write: IOPS=3398, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:08:49.511 slat (nsec): min=8353, max=44649, avg=10617.39, stdev=2997.27 00:08:49.511 clat (usec): min=76, max=215, avg=138.10, stdev=21.51 00:08:49.511 lat (usec): min=87, max=229, avg=148.72, stdev=21.65 00:08:49.511 clat percentiles (usec): 00:08:49.511 | 1.00th=[ 86], 5.00th=[ 94], 10.00th=[ 112], 20.00th=[ 121], 00:08:49.511 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:08:49.511 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:08:49.511 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 212], 00:08:49.511 | 99.99th=[ 217] 00:08:49.511 bw ( KiB/s): min=13488, max=13488, per=24.27%, avg=13488.00, stdev= 0.00, samples=1 00:08:49.511 iops : min= 3372, max= 3372, avg=3372.00, stdev= 0.00, samples=1 00:08:49.511 lat (usec) : 100=7.43%, 250=92.57% 00:08:49.511 cpu : usr=3.30%, sys=7.20%, ctx=6474, majf=0, minf=1 00:08:49.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.511 issued rwts: total=3072,3402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.511 00:08:49.511 Run status group 0 (all jobs): 00:08:49.511 READ: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:08:49.511 WRITE: bw=54.3MiB/s (56.9MB/s), 13.3MiB/s-13.9MiB/s (13.9MB/s-14.6MB/s), io=54.3MiB (57.0MB), run=1001-1001msec 00:08:49.511 00:08:49.511 Disk stats (read/write): 00:08:49.511 nvme0n1: ios=2610/3004, merge=0/0, ticks=373/399, in_queue=772, util=86.87% 00:08:49.511 nvme0n2: ios=2573/3002, merge=0/0, ticks=366/390, in_queue=756, util=87.53% 00:08:49.511 nvme0n3: ios=2560/2991, merge=0/0, ticks=361/385, in_queue=746, util=89.22% 00:08:49.511 nvme0n4: ios=2560/2969, merge=0/0, ticks=363/404, in_queue=767, util=89.78% 00:08:49.511 11:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:49.511 [global] 00:08:49.511 thread=1 00:08:49.511 invalidate=1 00:08:49.511 rw=randwrite 00:08:49.511 time_based=1 00:08:49.511 runtime=1 00:08:49.511 ioengine=libaio 00:08:49.511 direct=1 00:08:49.511 bs=4096 00:08:49.511 iodepth=1 00:08:49.511 norandommap=0 00:08:49.511 numjobs=1 00:08:49.511 00:08:49.511 verify_dump=1 00:08:49.511 verify_backlog=512 00:08:49.511 verify_state_save=0 00:08:49.511 do_verify=1 00:08:49.511 verify=crc32c-intel 00:08:49.511 [job0] 00:08:49.511 filename=/dev/nvme0n1 00:08:49.511 [job1] 00:08:49.511 filename=/dev/nvme0n2 00:08:49.511 [job2] 00:08:49.511 filename=/dev/nvme0n3 00:08:49.511 [job3] 00:08:49.511 filename=/dev/nvme0n4 00:08:49.511 Could not set queue depth (nvme0n1) 00:08:49.511 Could not set queue depth (nvme0n2) 
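The second fio-wrapper run above was invoked with -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v and generated the job file echoed in the trace. A hedged sketch of an equivalent direct fio invocation; the job-file path is illustrative, while the parameters are copied from the trace:

cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-randwrite.fio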
00:08:49.511 Could not set queue depth (nvme0n3) 00:08:49.511 Could not set queue depth (nvme0n4) 00:08:49.511 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.511 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.511 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.511 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.511 fio-3.35 00:08:49.511 Starting 4 threads 00:08:50.889 00:08:50.889 job0: (groupid=0, jobs=1): err= 0: pid=3125857: Mon Dec 9 11:47:58 2024 00:08:50.889 read: IOPS=4056, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1001msec) 00:08:50.889 slat (nsec): min=6004, max=16512, avg=6891.89, stdev=691.01 00:08:50.889 clat (usec): min=64, max=211, avg=117.58, stdev=16.51 00:08:50.889 lat (usec): min=71, max=217, avg=124.48, stdev=16.52 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 75], 5.00th=[ 93], 10.00th=[ 104], 20.00th=[ 110], 00:08:50.889 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:08:50.889 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 139], 95.00th=[ 153], 00:08:50.889 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 204], 99.95th=[ 204], 00:08:50.889 | 99.99th=[ 212] 00:08:50.889 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:08:50.889 slat (nsec): min=7659, max=65998, avg=8605.08, stdev=1243.09 00:08:50.889 clat (usec): min=55, max=205, avg=108.42, stdev=15.49 00:08:50.889 lat (usec): min=69, max=213, avg=117.02, stdev=15.52 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 68], 5.00th=[ 79], 10.00th=[ 95], 20.00th=[ 101], 00:08:50.889 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:08:50.889 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 125], 95.00th=[ 139], 00:08:50.889 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 198], 00:08:50.889 | 99.99th=[ 206] 00:08:50.889 bw ( KiB/s): min=16384, max=16384, per=26.54%, avg=16384.00, stdev= 0.00, samples=1 00:08:50.889 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:50.889 lat (usec) : 100=12.85%, 250=87.15% 00:08:50.889 cpu : usr=3.50%, sys=9.70%, ctx=8158, majf=0, minf=1 00:08:50.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.889 issued rwts: total=4061,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.889 job1: (groupid=0, jobs=1): err= 0: pid=3125858: Mon Dec 9 11:47:58 2024 00:08:50.889 read: IOPS=4056, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1001msec) 00:08:50.889 slat (nsec): min=5979, max=26335, avg=6939.93, stdev=731.48 00:08:50.889 clat (usec): min=63, max=219, avg=117.60, stdev=16.05 00:08:50.889 lat (usec): min=71, max=226, avg=124.54, stdev=16.04 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 76], 5.00th=[ 96], 10.00th=[ 104], 20.00th=[ 110], 00:08:50.889 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:08:50.889 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 141], 95.00th=[ 153], 00:08:50.889 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 198], 00:08:50.889 | 99.99th=[ 221] 00:08:50.889 write: IOPS=4091, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:08:50.889 slat (nsec): min=7618, max=39072, avg=8683.79, stdev=1063.84 00:08:50.889 clat (usec): min=58, max=176, avg=108.26, stdev=15.82 00:08:50.889 lat (usec): min=67, max=185, avg=116.94, stdev=15.81 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 69], 5.00th=[ 79], 10.00th=[ 94], 20.00th=[ 100], 00:08:50.889 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:08:50.889 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 129], 95.00th=[ 141], 00:08:50.889 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 172], 00:08:50.889 | 99.99th=[ 178] 00:08:50.889 bw ( KiB/s): min=16384, max=16384, per=26.54%, avg=16384.00, stdev= 0.00, samples=1 00:08:50.889 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:50.889 lat (usec) : 100=13.36%, 250=86.64% 00:08:50.889 cpu : usr=4.70%, sys=8.50%, ctx=8158, majf=0, minf=2 00:08:50.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.889 issued rwts: total=4061,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.889 job2: (groupid=0, jobs=1): err= 0: pid=3125859: Mon Dec 9 11:47:58 2024 00:08:50.889 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:50.889 slat (nsec): min=6246, max=31283, avg=9061.81, stdev=1821.98 00:08:50.889 clat (usec): min=75, max=202, avg=126.68, stdev=18.61 00:08:50.889 lat (usec): min=84, max=209, avg=135.74, stdev=18.81 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 82], 5.00th=[ 88], 10.00th=[ 93], 20.00th=[ 121], 00:08:50.889 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:08:50.889 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 149], 00:08:50.889 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 196], 99.95th=[ 198], 00:08:50.889 | 99.99th=[ 202] 00:08:50.889 write: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1001msec); 0 zone resets 00:08:50.889 slat (nsec): min=7914, max=35891, avg=10714.09, stdev=2157.23 00:08:50.889 clat (usec): min=73, max=188, avg=124.09, stdev=17.23 00:08:50.889 lat (usec): min=82, max=197, avg=134.80, stdev=17.38 00:08:50.889 clat percentiles (usec): 00:08:50.889 | 1.00th=[ 80], 5.00th=[ 86], 10.00th=[ 102], 20.00th=[ 119], 00:08:50.889 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:08:50.889 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 159], 00:08:50.889 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 182], 00:08:50.889 | 99.99th=[ 188] 00:08:50.889 bw ( KiB/s): min=16384, max=16384, per=26.54%, avg=16384.00, stdev= 0.00, samples=1 00:08:50.889 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:50.889 lat (usec) : 100=11.85%, 250=88.15% 00:08:50.889 cpu : usr=6.00%, sys=8.80%, ctx=7258, majf=0, minf=1 00:08:50.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.890 issued rwts: total=3584,3674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.890 job3: (groupid=0, jobs=1): err= 0: pid=3125860: Mon Dec 9 11:47:58 2024 00:08:50.890 read: 
IOPS=3441, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:08:50.890 slat (nsec): min=6434, max=30118, avg=9289.87, stdev=2199.92 00:08:50.890 clat (usec): min=81, max=208, avg=133.00, stdev=15.51 00:08:50.890 lat (usec): min=88, max=228, avg=142.29, stdev=15.83 00:08:50.890 clat percentiles (usec): 00:08:50.890 | 1.00th=[ 91], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 125], 00:08:50.890 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:08:50.890 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 165], 00:08:50.890 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 208], 00:08:50.890 | 99.99th=[ 208] 00:08:50.890 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:50.890 slat (nsec): min=8076, max=36268, avg=10980.38, stdev=1967.83 00:08:50.890 clat (usec): min=76, max=213, avg=125.97, stdev=15.95 00:08:50.890 lat (usec): min=87, max=222, avg=136.95, stdev=15.83 00:08:50.890 clat percentiles (usec): 00:08:50.890 | 1.00th=[ 84], 5.00th=[ 94], 10.00th=[ 115], 20.00th=[ 119], 00:08:50.890 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 127], 00:08:50.890 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 159], 00:08:50.890 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 206], 99.95th=[ 212], 00:08:50.890 | 99.99th=[ 215] 00:08:50.890 bw ( KiB/s): min=15744, max=15744, per=25.50%, avg=15744.00, stdev= 0.00, samples=1 00:08:50.890 iops : min= 3936, max= 3936, avg=3936.00, stdev= 0.00, samples=1 00:08:50.890 lat (usec) : 100=4.99%, 250=95.01% 00:08:50.890 cpu : usr=3.80%, sys=10.30%, ctx=7029, majf=0, minf=2 00:08:50.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.890 issued rwts: total=3445,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.890 00:08:50.890 Run status group 0 (all jobs): 00:08:50.890 READ: bw=59.1MiB/s (62.0MB/s), 13.4MiB/s-15.8MiB/s (14.1MB/s-16.6MB/s), io=59.2MiB (62.1MB), run=1001-1001msec 00:08:50.890 WRITE: bw=60.3MiB/s (63.2MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.4MiB (63.3MB), run=1001-1001msec 00:08:50.890 00:08:50.890 Disk stats (read/write): 00:08:50.890 nvme0n1: ios=3484/3584, merge=0/0, ticks=400/374, in_queue=774, util=86.87% 00:08:50.890 nvme0n2: ios=3436/3584, merge=0/0, ticks=397/363, in_queue=760, util=87.42% 00:08:50.890 nvme0n3: ios=3072/3210, merge=0/0, ticks=366/369, in_queue=735, util=89.23% 00:08:50.890 nvme0n4: ios=2983/3072, merge=0/0, ticks=372/361, in_queue=733, util=89.79% 00:08:50.890 11:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:50.890 [global] 00:08:50.890 thread=1 00:08:50.890 invalidate=1 00:08:50.890 rw=write 00:08:50.890 time_based=1 00:08:50.890 runtime=1 00:08:50.890 ioengine=libaio 00:08:50.890 direct=1 00:08:50.890 bs=4096 00:08:50.890 iodepth=128 00:08:50.890 norandommap=0 00:08:50.890 numjobs=1 00:08:50.890 00:08:50.890 verify_dump=1 00:08:50.890 verify_backlog=512 00:08:50.890 verify_state_save=0 00:08:50.890 do_verify=1 00:08:50.890 verify=crc32c-intel 00:08:50.890 [job0] 00:08:50.890 filename=/dev/nvme0n1 00:08:50.890 [job1] 00:08:50.890 filename=/dev/nvme0n2 00:08:50.890 [job2] 00:08:50.890 filename=/dev/nvme0n3 00:08:50.890 
[job3] 00:08:50.890 filename=/dev/nvme0n4 00:08:50.890 Could not set queue depth (nvme0n1) 00:08:50.890 Could not set queue depth (nvme0n2) 00:08:50.890 Could not set queue depth (nvme0n3) 00:08:50.890 Could not set queue depth (nvme0n4) 00:08:51.148 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.148 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.148 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.148 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:51.148 fio-3.35 00:08:51.148 Starting 4 threads 00:08:52.548 00:08:52.548 job0: (groupid=0, jobs=1): err= 0: pid=3126234: Mon Dec 9 11:48:00 2024 00:08:52.548 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:08:52.548 slat (nsec): min=1438, max=1372.1k, avg=68352.44, stdev=219451.07 00:08:52.548 clat (usec): min=5793, max=12889, avg=8840.86, stdev=2132.77 00:08:52.548 lat (usec): min=6779, max=13133, avg=8909.21, stdev=2142.12 00:08:52.548 clat percentiles (usec): 00:08:52.548 | 1.00th=[ 6325], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7177], 00:08:52.548 | 30.00th=[ 7308], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7570], 00:08:52.548 | 70.00th=[11338], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:08:52.548 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:08:52.548 | 99.99th=[12911] 00:08:52.548 write: IOPS=7588, BW=29.6MiB/s (31.1MB/s)(29.7MiB/1002msec); 0 zone resets 00:08:52.548 slat (nsec): min=1984, max=1401.9k, avg=64638.99, stdev=205491.81 00:08:52.548 clat (usec): min=796, max=12492, avg=8340.60, stdev=2020.91 00:08:52.548 lat (usec): min=1806, max=12495, avg=8405.24, stdev=2028.35 00:08:52.548 clat percentiles (usec): 00:08:52.548 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6849], 00:08:52.548 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7242], 00:08:52.548 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11207], 95.00th=[11338], 00:08:52.548 | 99.00th=[11731], 99.50th=[11731], 99.90th=[12125], 99.95th=[12387], 00:08:52.548 | 99.99th=[12518] 00:08:52.548 bw ( KiB/s): min=24576, max=35240, per=31.73%, avg=29908.00, stdev=7540.59, samples=2 00:08:52.548 iops : min= 6144, max= 8810, avg=7477.00, stdev=1885.15, samples=2 00:08:52.548 lat (usec) : 1000=0.01% 00:08:52.548 lat (msec) : 2=0.07%, 4=0.25%, 10=63.86%, 20=35.82% 00:08:52.548 cpu : usr=3.00%, sys=4.40%, ctx=1557, majf=0, minf=1 00:08:52.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:52.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.548 issued rwts: total=7168,7604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.548 job1: (groupid=0, jobs=1): err= 0: pid=3126235: Mon Dec 9 11:48:00 2024 00:08:52.548 read: IOPS=6323, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1003msec) 00:08:52.548 slat (nsec): min=1496, max=2313.3k, avg=77600.21, stdev=245631.18 00:08:52.548 clat (usec): min=1940, max=17359, avg=9886.07, stdev=2866.63 00:08:52.548 lat (usec): min=3999, max=17362, avg=9963.67, stdev=2883.48 00:08:52.548 clat percentiles (usec): 00:08:52.549 | 1.00th=[ 6456], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:08:52.549 | 
30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[10683], 60.00th=[11338], 00:08:52.549 | 70.00th=[11731], 80.00th=[11863], 90.00th=[13566], 95.00th=[15795], 00:08:52.549 | 99.00th=[16188], 99.50th=[16319], 99.90th=[17433], 99.95th=[17433], 00:08:52.549 | 99.99th=[17433] 00:08:52.549 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:08:52.549 slat (nsec): min=1974, max=2285.2k, avg=73677.38, stdev=232619.22 00:08:52.549 clat (usec): min=3933, max=16629, avg=9643.90, stdev=2935.05 00:08:52.549 lat (usec): min=3974, max=16632, avg=9717.58, stdev=2953.23 00:08:52.549 clat percentiles (usec): 00:08:52.549 | 1.00th=[ 6259], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6783], 00:08:52.549 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[10290], 60.00th=[10814], 00:08:52.549 | 70.00th=[11076], 80.00th=[11207], 90.00th=[15139], 95.00th=[15795], 00:08:52.549 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581], 00:08:52.549 | 99.99th=[16581] 00:08:52.549 bw ( KiB/s): min=24576, max=28672, per=28.25%, avg=26624.00, stdev=2896.31, samples=2 00:08:52.549 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:08:52.549 lat (msec) : 2=0.01%, 4=0.02%, 10=48.38%, 20=51.59% 00:08:52.549 cpu : usr=2.79%, sys=4.19%, ctx=1555, majf=0, minf=1 00:08:52.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:52.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.549 issued rwts: total=6342,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.549 job2: (groupid=0, jobs=1): err= 0: pid=3126236: Mon Dec 9 11:48:00 2024 00:08:52.549 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:08:52.549 slat (nsec): min=1619, max=1469.5k, avg=107413.62, stdev=297624.17 00:08:52.549 clat (usec): min=7352, max=18277, avg=13682.03, stdev=3117.69 00:08:52.549 lat (usec): min=7355, max=18734, avg=13789.44, stdev=3129.75 00:08:52.549 clat percentiles (usec): 00:08:52.549 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:08:52.549 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:08:52.549 | 70.00th=[14615], 80.00th=[17171], 90.00th=[17433], 95.00th=[17695], 00:08:52.549 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:08:52.549 | 99.99th=[18220] 00:08:52.549 write: IOPS=4988, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1002msec); 0 zone resets 00:08:52.549 slat (usec): min=2, max=1649, avg=98.31, stdev=274.37 00:08:52.549 clat (usec): min=1000, max=18202, avg=12720.21, stdev=3258.96 00:08:52.549 lat (usec): min=1827, max=18206, avg=12818.52, stdev=3272.10 00:08:52.549 clat percentiles (usec): 00:08:52.549 | 1.00th=[ 4621], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8586], 00:08:52.549 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:08:52.549 | 70.00th=[13829], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:08:52.549 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:08:52.549 | 99.99th=[18220] 00:08:52.549 bw ( KiB/s): min=19416, max=19552, per=20.67%, avg=19484.00, stdev=96.17, samples=2 00:08:52.549 iops : min= 4854, max= 4888, avg=4871.00, stdev=24.04, samples=2 00:08:52.549 lat (msec) : 2=0.22%, 4=0.25%, 10=25.54%, 20=74.00% 00:08:52.549 cpu : usr=2.00%, sys=3.70%, ctx=1213, majf=0, minf=1 00:08:52.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.3%, >=64=99.3% 00:08:52.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.549 issued rwts: total=4608,4998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.549 job3: (groupid=0, jobs=1): err= 0: pid=3126237: Mon Dec 9 11:48:00 2024 00:08:52.549 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:08:52.549 slat (nsec): min=1337, max=2461.0k, avg=119927.24, stdev=324153.88 00:08:52.549 clat (usec): min=11169, max=18258, avg=15370.65, stdev=1477.65 00:08:52.549 lat (usec): min=11175, max=18262, avg=15490.57, stdev=1455.24 00:08:52.549 clat percentiles (usec): 00:08:52.549 | 1.00th=[13042], 5.00th=[13566], 10.00th=[13829], 20.00th=[14091], 00:08:52.549 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[15533], 00:08:52.549 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17695], 95.00th=[17695], 00:08:52.549 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:08:52.549 | 99.99th=[18220] 00:08:52.549 write: IOPS=4363, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1003msec); 0 zone resets 00:08:52.549 slat (nsec): min=1968, max=1802.3k, avg=113478.90, stdev=306910.35 00:08:52.549 clat (usec): min=1136, max=17518, avg=14561.62, stdev=1911.64 00:08:52.549 lat (usec): min=2816, max=17522, avg=14675.09, stdev=1895.51 00:08:52.549 clat percentiles (usec): 00:08:52.549 | 1.00th=[ 6652], 5.00th=[12649], 10.00th=[13042], 20.00th=[13304], 00:08:52.549 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[15401], 00:08:52.549 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:08:52.549 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:08:52.549 | 99.99th=[17433] 00:08:52.549 bw ( KiB/s): min=14600, max=19392, per=18.03%, avg=16996.00, stdev=3388.46, samples=2 00:08:52.549 iops : min= 3650, max= 4848, avg=4249.00, stdev=847.11, samples=2 00:08:52.549 lat (msec) : 2=0.01%, 4=0.32%, 10=0.44%, 20=99.23% 00:08:52.549 cpu : usr=1.40%, sys=3.79%, ctx=1219, majf=0, minf=1 00:08:52.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:52.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.549 issued rwts: total=4096,4377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.549 00:08:52.549 Run status group 0 (all jobs): 00:08:52.549 READ: bw=86.5MiB/s (90.7MB/s), 16.0MiB/s-27.9MiB/s (16.7MB/s-29.3MB/s), io=86.8MiB (91.0MB), run=1002-1003msec 00:08:52.549 WRITE: bw=92.0MiB/s (96.5MB/s), 17.0MiB/s-29.6MiB/s (17.9MB/s-31.1MB/s), io=92.3MiB (96.8MB), run=1002-1003msec 00:08:52.549 00:08:52.549 Disk stats (read/write): 00:08:52.549 nvme0n1: ios=6091/6144, merge=0/0, ticks=18176/17430, in_queue=35606, util=86.86% 00:08:52.549 nvme0n2: ios=5696/6144, merge=0/0, ticks=15570/16245, in_queue=31815, util=87.31% 00:08:52.549 nvme0n3: ios=3584/3933, merge=0/0, ticks=13536/13669, in_queue=27205, util=89.22% 00:08:52.549 nvme0n4: ios=3584/3709, merge=0/0, ticks=13838/13413, in_queue=27251, util=89.78% 00:08:52.549 11:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:52.549 [global] 00:08:52.549 thread=1 00:08:52.549 
invalidate=1 00:08:52.549 rw=randwrite 00:08:52.549 time_based=1 00:08:52.549 runtime=1 00:08:52.549 ioengine=libaio 00:08:52.549 direct=1 00:08:52.549 bs=4096 00:08:52.549 iodepth=128 00:08:52.549 norandommap=0 00:08:52.549 numjobs=1 00:08:52.549 00:08:52.549 verify_dump=1 00:08:52.549 verify_backlog=512 00:08:52.549 verify_state_save=0 00:08:52.549 do_verify=1 00:08:52.549 verify=crc32c-intel 00:08:52.549 [job0] 00:08:52.549 filename=/dev/nvme0n1 00:08:52.549 [job1] 00:08:52.549 filename=/dev/nvme0n2 00:08:52.549 [job2] 00:08:52.549 filename=/dev/nvme0n3 00:08:52.549 [job3] 00:08:52.549 filename=/dev/nvme0n4 00:08:52.549 Could not set queue depth (nvme0n1) 00:08:52.549 Could not set queue depth (nvme0n2) 00:08:52.549 Could not set queue depth (nvme0n3) 00:08:52.549 Could not set queue depth (nvme0n4) 00:08:52.549 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.549 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.549 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.549 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.549 fio-3.35 00:08:52.549 Starting 4 threads 00:08:53.925 00:08:53.925 job0: (groupid=0, jobs=1): err= 0: pid=3126642: Mon Dec 9 11:48:01 2024 00:08:53.925 read: IOPS=5082, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:08:53.925 slat (nsec): min=1586, max=1643.2k, avg=99618.20, stdev=260268.45 00:08:53.925 clat (usec): min=5147, max=18104, avg=12816.29, stdev=698.56 00:08:53.925 lat (usec): min=5730, max=18110, avg=12915.91, stdev=676.16 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:08:53.925 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:08:53.925 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:08:53.925 | 99.00th=[13960], 99.50th=[16450], 99.90th=[17957], 99.95th=[17957], 00:08:53.925 | 99.99th=[18220] 00:08:53.925 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:08:53.925 slat (usec): min=2, max=1631, avg=93.46, stdev=244.99 00:08:53.925 clat (usec): min=6635, max=13424, avg=12085.53, stdev=460.90 00:08:53.925 lat (usec): min=6706, max=13789, avg=12178.98, stdev=435.21 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11469], 20.00th=[11863], 00:08:53.925 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:08:53.925 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12518], 00:08:53.925 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13304], 00:08:53.925 | 99.99th=[13435] 00:08:53.925 bw ( KiB/s): min=20480, max=20480, per=25.18%, avg=20480.00, stdev= 0.00, samples=2 00:08:53.925 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:53.925 lat (msec) : 10=0.74%, 20=99.26% 00:08:53.925 cpu : usr=1.99%, sys=3.38%, ctx=1557, majf=0, minf=1 00:08:53.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:53.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.925 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.925 latency : target=0, window=0, percentile=100.00%, depth=128 
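The fio-wrapper invocation above (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v) drives stock fio against the kernel nvme block devices created by the earlier nvme connect. A minimal standalone sketch of an equivalent job file, assembled from the [global] and [jobN] sections echoed in the log; the file name randwrite-verify.fio is illustrative, not taken from the run:

# Hypothetical file name; every option below is copied from the [global] and
# [jobN] sections the wrapper printed above.
cat > randwrite-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio randwrite-verify.fio

With do_verify=1 and verify=crc32c-intel, fio checksums every block it writes and reads it back after the one-second write phase, so these short runs exercise data integrity over the RDMA path rather than measuring pure throughput.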
00:08:53.925 job1: (groupid=0, jobs=1): err= 0: pid=3126643: Mon Dec 9 11:48:01 2024 00:08:53.925 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1006msec) 00:08:53.925 slat (nsec): min=1520, max=1702.5k, avg=99935.71, stdev=265915.14 00:08:53.925 clat (usec): min=5487, max=18428, avg=12841.86, stdev=685.04 00:08:53.925 lat (usec): min=6317, max=18433, avg=12941.80, stdev=662.72 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[11076], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:08:53.925 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:08:53.925 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:08:53.925 | 99.00th=[14353], 99.50th=[16909], 99.90th=[17695], 99.95th=[18482], 00:08:53.925 | 99.99th=[18482] 00:08:53.925 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:08:53.925 slat (nsec): min=1990, max=1627.5k, avg=93439.80, stdev=252572.06 00:08:53.925 clat (usec): min=7202, max=14026, avg=12108.05, stdev=475.95 00:08:53.925 lat (usec): min=7210, max=14063, avg=12201.49, stdev=461.38 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[11863], 00:08:53.925 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:08:53.925 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12518], 00:08:53.925 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13435], 99.95th=[13698], 00:08:53.925 | 99.99th=[14091] 00:08:53.925 bw ( KiB/s): min=20480, max=20480, per=25.18%, avg=20480.00, stdev= 0.00, samples=2 00:08:53.925 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:53.925 lat (msec) : 10=0.72%, 20=99.28% 00:08:53.925 cpu : usr=1.89%, sys=3.38%, ctx=1478, majf=0, minf=1 00:08:53.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:53.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.925 issued rwts: total=5094,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.925 job2: (groupid=0, jobs=1): err= 0: pid=3126644: Mon Dec 9 11:48:01 2024 00:08:53.925 read: IOPS=5083, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:08:53.925 slat (nsec): min=1578, max=1640.8k, avg=99474.88, stdev=258732.91 00:08:53.925 clat (usec): min=5384, max=18374, avg=12805.60, stdev=721.34 00:08:53.925 lat (usec): min=6205, max=18379, avg=12905.08, stdev=705.62 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[ 9503], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:08:53.925 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:08:53.925 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:08:53.925 | 99.00th=[13960], 99.50th=[15270], 99.90th=[17695], 99.95th=[18482], 00:08:53.925 | 99.99th=[18482] 00:08:53.925 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:08:53.925 slat (usec): min=2, max=1640, avg=93.52, stdev=246.66 00:08:53.925 clat (usec): min=7192, max=13866, avg=12088.22, stdev=459.73 00:08:53.925 lat (usec): min=7202, max=13870, avg=12181.74, stdev=436.04 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[11863], 00:08:53.925 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:08:53.925 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 
95.00th=[12518], 00:08:53.925 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13435], 00:08:53.925 | 99.99th=[13829] 00:08:53.925 bw ( KiB/s): min=20480, max=20480, per=25.18%, avg=20480.00, stdev= 0.00, samples=2 00:08:53.925 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:53.925 lat (msec) : 10=0.84%, 20=99.16% 00:08:53.925 cpu : usr=1.49%, sys=3.78%, ctx=1454, majf=0, minf=1 00:08:53.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:53.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.925 issued rwts: total=5114,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.925 job3: (groupid=0, jobs=1): err= 0: pid=3126645: Mon Dec 9 11:48:01 2024 00:08:53.925 read: IOPS=5033, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1007msec) 00:08:53.925 slat (nsec): min=1657, max=1697.0k, avg=99767.09, stdev=260512.11 00:08:53.925 clat (usec): min=5685, max=18126, avg=12875.28, stdev=695.53 00:08:53.925 lat (usec): min=5693, max=18807, avg=12975.05, stdev=696.87 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:08:53.925 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:08:53.925 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13173], 95.00th=[13304], 00:08:53.925 | 99.00th=[13960], 99.50th=[15664], 99.90th=[17957], 99.95th=[18220], 00:08:53.925 | 99.99th=[18220] 00:08:53.925 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:08:53.925 slat (usec): min=2, max=1564, avg=93.59, stdev=243.05 00:08:53.925 clat (usec): min=7397, max=13372, avg=12134.39, stdev=429.61 00:08:53.925 lat (usec): min=7403, max=14100, avg=12227.98, stdev=427.49 00:08:53.925 clat percentiles (usec): 00:08:53.925 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11731], 20.00th=[11994], 00:08:53.925 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:08:53.925 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12518], 00:08:53.925 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13304], 00:08:53.925 | 99.99th=[13435] 00:08:53.925 bw ( KiB/s): min=20480, max=20480, per=25.18%, avg=20480.00, stdev= 0.00, samples=2 00:08:53.925 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:53.925 lat (msec) : 10=0.80%, 20=99.20% 00:08:53.925 cpu : usr=2.29%, sys=4.08%, ctx=1404, majf=0, minf=1 00:08:53.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:53.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.925 issued rwts: total=5069,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.925 00:08:53.925 Run status group 0 (all jobs): 00:08:53.925 READ: bw=79.1MiB/s (82.9MB/s), 19.7MiB/s-19.9MiB/s (20.6MB/s-20.8MB/s), io=79.6MiB (83.5MB), run=1006-1007msec 00:08:53.925 WRITE: bw=79.4MiB/s (83.3MB/s), 19.9MiB/s-19.9MiB/s (20.8MB/s-20.8MB/s), io=80.0MiB (83.9MB), run=1006-1007msec 00:08:53.925 00:08:53.925 Disk stats (read/write): 00:08:53.925 nvme0n1: ios=4191/4608, merge=0/0, ticks=26265/27349, in_queue=53614, util=86.77% 00:08:53.925 nvme0n2: ios=4130/4608, merge=0/0, ticks=26202/27410, in_queue=53612, util=87.22% 
00:08:53.925 nvme0n3: ios=4136/4608, merge=0/0, ticks=26211/27379, in_queue=53590, util=89.12% 00:08:53.925 nvme0n4: ios=4096/4604, merge=0/0, ticks=26102/27428, in_queue=53530, util=89.77% 00:08:53.925 11:48:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:53.925 11:48:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3126945 00:08:53.925 11:48:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:53.925 11:48:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:53.925 [global] 00:08:53.925 thread=1 00:08:53.925 invalidate=1 00:08:53.925 rw=read 00:08:53.925 time_based=1 00:08:53.925 runtime=10 00:08:53.925 ioengine=libaio 00:08:53.925 direct=1 00:08:53.925 bs=4096 00:08:53.925 iodepth=1 00:08:53.925 norandommap=1 00:08:53.925 numjobs=1 00:08:53.925 00:08:53.925 [job0] 00:08:53.925 filename=/dev/nvme0n1 00:08:53.925 [job1] 00:08:53.925 filename=/dev/nvme0n2 00:08:53.925 [job2] 00:08:53.925 filename=/dev/nvme0n3 00:08:53.925 [job3] 00:08:53.925 filename=/dev/nvme0n4 00:08:53.925 Could not set queue depth (nvme0n1) 00:08:53.925 Could not set queue depth (nvme0n2) 00:08:53.925 Could not set queue depth (nvme0n3) 00:08:53.925 Could not set queue depth (nvme0n4) 00:08:54.183 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.183 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.183 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.183 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.183 fio-3.35 00:08:54.183 Starting 4 threads 00:08:57.469 11:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:57.469 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=73818112, buflen=4096 00:08:57.469 fio: pid=3127112, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:57.469 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:57.469 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=74272768, buflen=4096 00:08:57.469 fio: pid=3127111, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:57.469 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.469 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:57.469 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16273408, buflen=4096 00:08:57.469 fio: pid=3127101, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:57.469 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.469 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:57.729 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45588480, buflen=4096 00:08:57.729 fio: pid=3127108, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:57.729 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.729 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:57.729 00:08:57.729 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3127101: Mon Dec 9 11:48:05 2024 00:08:57.729 read: IOPS=6541, BW=25.6MiB/s (26.8MB/s)(79.5MiB/3112msec) 00:08:57.729 slat (usec): min=5, max=15886, avg=10.43, stdev=186.41 00:08:57.729 clat (usec): min=49, max=22830, avg=140.11, stdev=225.63 00:08:57.729 lat (usec): min=56, max=22837, avg=150.54, stdev=292.65 00:08:57.729 clat percentiles (usec): 00:08:57.729 | 1.00th=[ 58], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 90], 00:08:57.729 | 30.00th=[ 121], 40.00th=[ 135], 50.00th=[ 147], 60.00th=[ 157], 00:08:57.729 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:08:57.729 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 247], 99.95th=[ 253], 00:08:57.729 | 99.99th=[ 1029] 00:08:57.729 bw ( KiB/s): min=22720, max=32136, per=26.17%, avg=26252.67, stdev=3606.08, samples=6 00:08:57.729 iops : min= 5680, max= 8034, avg=6563.17, stdev=901.52, samples=6 00:08:57.729 lat (usec) : 50=0.01%, 100=24.73%, 250=75.18%, 500=0.05%, 750=0.01% 00:08:57.729 lat (msec) : 2=0.01%, 50=0.01% 00:08:57.729 cpu : usr=1.90%, sys=7.23%, ctx=20363, majf=0, minf=1 00:08:57.729 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 issued rwts: total=20358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.729 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.729 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3127108: Mon Dec 9 11:48:05 2024 00:08:57.729 read: IOPS=8213, BW=32.1MiB/s (33.6MB/s)(107MiB/3350msec) 00:08:57.729 slat (usec): min=2, max=17908, avg=10.48, stdev=225.13 00:08:57.729 clat (usec): min=44, max=289, avg=109.93, stdev=44.81 00:08:57.729 lat (usec): min=47, max=17984, avg=120.41, stdev=229.19 00:08:57.729 clat percentiles (usec): 00:08:57.729 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 73], 00:08:57.729 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 128], 00:08:57.729 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 182], 00:08:57.729 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 227], 99.95th=[ 235], 00:08:57.729 | 99.99th=[ 251] 00:08:57.729 bw ( KiB/s): min=23680, max=44159, per=30.83%, avg=30929.17, stdev=8697.06, samples=6 00:08:57.729 iops : min= 5920, max=11039, avg=7732.17, stdev=2174.04, samples=6 00:08:57.729 lat (usec) : 50=0.12%, 100=54.72%, 250=45.16%, 500=0.01% 00:08:57.729 cpu : usr=2.87%, sys=8.75%, ctx=27521, majf=0, minf=2 00:08:57.729 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 complete : 
0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 issued rwts: total=27515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.729 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.729 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3127111: Mon Dec 9 11:48:05 2024 00:08:57.729 read: IOPS=6270, BW=24.5MiB/s (25.7MB/s)(70.8MiB/2892msec) 00:08:57.729 slat (usec): min=6, max=13887, avg= 9.10, stdev=135.39 00:08:57.729 clat (usec): min=69, max=22910, avg=148.04, stdev=172.41 00:08:57.729 lat (usec): min=77, max=22918, avg=157.14, stdev=218.89 00:08:57.729 clat percentiles (usec): 00:08:57.729 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 96], 20.00th=[ 115], 00:08:57.729 | 30.00th=[ 131], 40.00th=[ 141], 50.00th=[ 153], 60.00th=[ 161], 00:08:57.729 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 194], 00:08:57.729 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 241], 99.95th=[ 247], 00:08:57.729 | 99.99th=[ 1004] 00:08:57.729 bw ( KiB/s): min=23136, max=30504, per=25.38%, avg=25462.40, stdev=2914.28, samples=5 00:08:57.729 iops : min= 5784, max= 7626, avg=6365.60, stdev=728.57, samples=5 00:08:57.729 lat (usec) : 100=13.91%, 250=86.04%, 500=0.02%, 750=0.01% 00:08:57.729 lat (msec) : 2=0.01%, 50=0.01% 00:08:57.729 cpu : usr=2.32%, sys=6.99%, ctx=18136, majf=0, minf=2 00:08:57.729 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 issued rwts: total=18134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.729 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.729 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3127112: Mon Dec 9 11:48:05 2024 00:08:57.729 read: IOPS=6695, BW=26.2MiB/s (27.4MB/s)(70.4MiB/2692msec) 00:08:57.729 slat (nsec): min=6185, max=34373, avg=7243.24, stdev=1072.77 00:08:57.729 clat (usec): min=71, max=266, avg=139.87, stdev=35.95 00:08:57.729 lat (usec): min=78, max=273, avg=147.11, stdev=36.01 00:08:57.729 clat percentiles (usec): 00:08:57.729 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 96], 00:08:57.729 | 30.00th=[ 125], 40.00th=[ 133], 50.00th=[ 147], 60.00th=[ 155], 00:08:57.729 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:08:57.729 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 237], 99.95th=[ 241], 00:08:57.729 | 99.99th=[ 251] 00:08:57.729 bw ( KiB/s): min=23064, max=35384, per=27.27%, avg=27358.40, stdev=4914.44, samples=5 00:08:57.729 iops : min= 5766, max= 8846, avg=6839.60, stdev=1228.61, samples=5 00:08:57.729 lat (usec) : 100=23.10%, 250=76.89%, 500=0.01% 00:08:57.729 cpu : usr=1.86%, sys=8.03%, ctx=18023, majf=0, minf=1 00:08:57.729 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.729 issued rwts: total=18023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.729 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.729 00:08:57.729 Run status group 0 (all jobs): 00:08:57.729 READ: bw=98.0MiB/s (103MB/s), 24.5MiB/s-32.1MiB/s (25.7MB/s-33.6MB/s), io=328MiB (344MB), run=2692-3350msec 00:08:57.729 00:08:57.729 Disk stats (read/write): 00:08:57.729 nvme0n1: 
ios=20257/0, merge=0/0, ticks=2715/0, in_queue=2715, util=93.47% 00:08:57.729 nvme0n2: ios=27360/0, merge=0/0, ticks=2827/0, in_queue=2827, util=93.44% 00:08:57.729 nvme0n3: ios=17843/0, merge=0/0, ticks=2501/0, in_queue=2501, util=95.51% 00:08:57.729 nvme0n4: ios=17534/0, merge=0/0, ticks=2301/0, in_queue=2301, util=96.42% 00:08:57.989 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:57.989 11:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:58.248 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.248 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:58.508 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.508 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:58.508 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.508 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:58.767 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:58.767 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3126945 00:08:58.767 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:58.767 11:48:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:59.702 nvmf hotplug test: fio failed as expected 00:08:59.702 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:59.961 rmmod nvme_rdma 00:08:59.961 rmmod nvme_fabrics 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3124067 ']' 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3124067 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3124067 ']' 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3124067 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.961 11:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124067 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124067' 00:09:00.220 killing process with pid 3124067 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3124067 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3124067 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:00.220 00:09:00.220 real 0m25.581s 
00:09:00.220 user 1m51.282s 00:09:00.220 sys 0m8.423s 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.220 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.220 ************************************ 00:09:00.220 END TEST nvmf_fio_target 00:09:00.220 ************************************ 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.480 ************************************ 00:09:00.480 START TEST nvmf_bdevio 00:09:00.480 ************************************ 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:00.480 * Looking for test storage... 00:09:00.480 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.480 --rc genhtml_branch_coverage=1 00:09:00.480 --rc genhtml_function_coverage=1 00:09:00.480 --rc genhtml_legend=1 00:09:00.480 --rc geninfo_all_blocks=1 00:09:00.480 --rc geninfo_unexecuted_blocks=1 00:09:00.480 00:09:00.480 ' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.480 --rc genhtml_branch_coverage=1 00:09:00.480 --rc genhtml_function_coverage=1 00:09:00.480 --rc genhtml_legend=1 00:09:00.480 --rc geninfo_all_blocks=1 00:09:00.480 --rc geninfo_unexecuted_blocks=1 00:09:00.480 00:09:00.480 ' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.480 --rc genhtml_branch_coverage=1 00:09:00.480 --rc genhtml_function_coverage=1 00:09:00.480 --rc genhtml_legend=1 00:09:00.480 --rc geninfo_all_blocks=1 00:09:00.480 --rc geninfo_unexecuted_blocks=1 00:09:00.480 00:09:00.480 ' 00:09:00.480 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.480 --rc genhtml_branch_coverage=1 00:09:00.480 --rc genhtml_function_coverage=1 00:09:00.481 --rc genhtml_legend=1 00:09:00.481 --rc geninfo_all_blocks=1 00:09:00.481 --rc geninfo_unexecuted_blocks=1 00:09:00.481 00:09:00.481 ' 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:00.481 11:48:08 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.481 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.740 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.741 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.741 11:48:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.310 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.310 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.310 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.310 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.310 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:07.311 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:07.311 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.311 11:48:14 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:07.311 Found net devices under 0000:da:00.0: mlx_0_0 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:07.311 Found net devices under 0000:da:00.1: mlx_0_1 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
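
(Aside, not captured log output: the PCI scan earlier in this trace matched vendor 0x15b3 with device 0x1015 on 0000:da:00.0 and 0000:da:00.1; 0x15b3 is the Mellanox PCI vendor ID and 0x1015 is believed to be ConnectX-4 Lx. A hedged one-liner to list the same adapters outside the harness:)

    # sketch: list the adapters the scan above matched (IDs taken from the log)
    lspci -nn -d 15b3:1015
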
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:09:07.311 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:07.311 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:09:07.311 altname enp218s0f0np0
00:09:07.311 altname ens818f0np0
00:09:07.311 inet 192.168.100.8/24 scope global mlx_0_0
00:09:07.311 valid_lft forever preferred_lft forever
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:09:07.311 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:07.311 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:09:07.311 altname enp218s0f1np1
00:09:07.311 altname ens818f1np1
00:09:07.311 inet 192.168.100.9/24 scope global mlx_0_1
00:09:07.311 valid_lft forever preferred_lft forever
00:09:07.311 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
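
(Aside, not captured log output: allocate_nic_ips above resolves 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1 with an awk/cut pipeline over ip -o -4 addr show. A minimal standalone sketch of the same extraction, assuming the interface names and addresses from this rig:)

    # the pipeline the trace runs per interface, verbatim from the log:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # prints 192.168.100.8
    # and the kind of configuration it expects to find already in place:
    ip addr add 192.168.100.9/24 dev mlx_0_1                      # hedged example setup
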
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:07.312 192.168.100.9' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:07.312 192.168.100.9' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:07.312 192.168.100.9' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3131741 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3131741 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3131741 ']' 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.312 [2024-12-09 11:48:14.509220] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:09:07.312 [2024-12-09 11:48:14.509264] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.312 [2024-12-09 11:48:14.584214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.312 [2024-12-09 11:48:14.623206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.312 [2024-12-09 11:48:14.623245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.312 [2024-12-09 11:48:14.623252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.312 [2024-12-09 11:48:14.623257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.312 [2024-12-09 11:48:14.623262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
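
(Aside, not captured log output: nvmf_tgt is launched above with core mask -m 0x78, i.e. binary 1111000 with bits 3-6 set, which is why the four reactor notices just below land on cores 3, 4, 5 and 6. A small bash check of the mask:)

    mask=0x78                                  # value from the nvmf_tgt command line above
    for i in {0..7}; do
      (( (mask >> i) & 1 )) && echo "core $i"  # prints core 3, 4, 5 and 6
    done
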
00:09:07.312 [2024-12-09 11:48:14.624718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:07.312 [2024-12-09 11:48:14.624864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:07.312 [2024-12-09 11:48:14.624952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.312 [2024-12-09 11:48:14.624953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.312 [2024-12-09 11:48:14.797692] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17cc240/0x17d0730) succeed. 00:09:07.312 [2024-12-09 11:48:14.809025] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17cd8d0/0x1811dd0) succeed. 
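
(Aside, not captured log output: rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket mentioned earlier, so the target built in this test could be reproduced by hand. A hedged sketch using the exact parameters the trace shows above and below:)

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
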
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:07.312 Malloc0
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:07.312 [2024-12-09 11:48:14.988954] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:07.312 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:07.312 {
00:09:07.312 "params": {
00:09:07.312 "name": "Nvme$subsystem",
00:09:07.312 "trtype": "$TEST_TRANSPORT",
00:09:07.312 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:07.312 "adrfam": "ipv4",
00:09:07.312 "trsvcid": "$NVMF_PORT",
00:09:07.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:07.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:07.313 "hdgst": ${hdgst:-false},
00:09:07.313 "ddgst": ${ddgst:-false}
00:09:07.313 },
00:09:07.313 "method": "bdev_nvme_attach_controller"
00:09:07.313 }
00:09:07.313 EOF
00:09:07.313 )")
00:09:07.313 11:48:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:09:07.313 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:09:07.313 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:09:07.313 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:07.313 "params": {
00:09:07.313 "name": "Nvme1",
00:09:07.313 "trtype": "rdma",
00:09:07.313 "traddr": "192.168.100.8",
00:09:07.313 "adrfam": "ipv4",
00:09:07.313 "trsvcid": "4420",
00:09:07.313 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:07.313 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:07.313 "hdgst": false,
00:09:07.313 "ddgst": false
00:09:07.313 },
00:09:07.313 "method": "bdev_nvme_attach_controller"
00:09:07.313 }'
00:09:07.313 [2024-12-09 11:48:15.036577] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:09:07.313 [2024-12-09 11:48:15.036619] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131766 ]
00:09:07.313 [2024-12-09 11:48:15.113630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:07.313 [2024-12-09 11:48:15.157699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.313 [2024-12-09 11:48:15.157840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.313 [2024-12-09 11:48:15.157841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:07.313 I/O targets:
00:09:07.313 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:09:07.313
00:09:07.313
00:09:07.313 CUnit - A unit testing framework for C - Version 2.1-3
00:09:07.313 http://cunit.sourceforge.net/
00:09:07.313
00:09:07.313
00:09:07.313 Suite: bdevio tests on: Nvme1n1
00:09:07.313 Test: blockdev write read block ...passed
00:09:07.313 Test: blockdev write zeroes read block ...passed
00:09:07.313 Test: blockdev write zeroes read no split ...passed
00:09:07.313 Test: blockdev write zeroes read split ...passed
00:09:07.571 Test: blockdev write zeroes read split partial ...passed
00:09:07.571 Test: blockdev reset ...[2024-12-09 11:48:15.364372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:09:07.571 [2024-12-09 11:48:15.387647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:09:07.571 [2024-12-09 11:48:15.413727] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:09:07.571 passed
00:09:07.571 Test: blockdev write read 8 blocks ...passed
00:09:07.571 Test: blockdev write read size > 128k ...passed
00:09:07.571 Test: blockdev write read invalid size ...passed
00:09:07.571 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:07.571 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:07.571 Test: blockdev write read max offset ...passed
00:09:07.571 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:07.571 Test: blockdev writev readv 8 blocks ...passed
00:09:07.571 Test: blockdev writev readv 30 x 1block ...passed
00:09:07.571 Test: blockdev writev readv block ...passed
00:09:07.571 Test: blockdev writev readv size > 128k ...passed
00:09:07.572 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:07.572 Test: blockdev comparev and writev ...[2024-12-09 11:48:15.417213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.417846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:09:07.572 [2024-12-09 11:48:15.417853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:09:07.572 passed
00:09:07.572 Test: blockdev nvme passthru rw ...passed
00:09:07.572 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:48:15.418157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:07.572 [2024-12-09 11:48:15.418168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.418219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:07.572 [2024-12-09 11:48:15.418229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.418274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:07.572 [2024-12-09 11:48:15.418284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:09:07.572 [2024-12-09 11:48:15.418328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:09:07.572 [2024-12-09 11:48:15.418337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:09:07.572 passed
00:09:07.572 Test: blockdev nvme admin passthru ...passed
00:09:07.572 Test: blockdev copy ...passed
00:09:07.572
00:09:07.572 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:07.572               suites      1      1    n/a      0        0
00:09:07.572                tests     23     23     23      0        0
00:09:07.572              asserts    152    152    152      0      n/a
00:09:07.572
00:09:07.572 Elapsed time = 0.174 seconds
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:07.572 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3131741 ']'
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3131741
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3131741 ']'
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3131741
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131741
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131741'
killing process with pid 3131741
11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3131741
00:09:07.831 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3131741
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:08.089
00:09:08.089 real 0m7.598s
00:09:08.089 user 0m7.986s
00:09:08.089 sys 0m4.975s
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:08.089 ************************************
00:09:08.089 END TEST nvmf_bdevio
00:09:08.089 ************************************
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:08.089
00:09:08.089 real 3m55.181s
00:09:08.089 user 10m26.814s
00:09:08.089 sys 1m20.628s
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.089 11:48:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:08.089 ************************************
00:09:08.089 END TEST nvmf_target_core
00:09:08.089 ************************************
00:09:08.089 11:48:16 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma
00:09:08.089 11:48:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:08.089 11:48:16 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.089 11:48:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:09:08.089 ************************************
00:09:08.089 START TEST nvmf_target_extra
00:09:08.089 ************************************
00:09:08.089 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:08.089 * Looking for test storage... 00:09:08.089 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:08.089 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.089 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.089 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.348 --rc genhtml_branch_coverage=1 00:09:08.348 --rc genhtml_function_coverage=1 00:09:08.348 --rc genhtml_legend=1 00:09:08.348 --rc geninfo_all_blocks=1 00:09:08.348 --rc geninfo_unexecuted_blocks=1 00:09:08.348 00:09:08.348 ' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.348 --rc genhtml_branch_coverage=1 00:09:08.348 --rc genhtml_function_coverage=1 00:09:08.348 --rc genhtml_legend=1 00:09:08.348 --rc geninfo_all_blocks=1 00:09:08.348 --rc geninfo_unexecuted_blocks=1 00:09:08.348 00:09:08.348 ' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.348 --rc genhtml_branch_coverage=1 00:09:08.348 --rc genhtml_function_coverage=1 00:09:08.348 --rc genhtml_legend=1 00:09:08.348 --rc geninfo_all_blocks=1 00:09:08.348 --rc geninfo_unexecuted_blocks=1 00:09:08.348 00:09:08.348 ' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.348 --rc genhtml_branch_coverage=1 00:09:08.348 --rc genhtml_function_coverage=1 00:09:08.348 --rc genhtml_legend=1 00:09:08.348 --rc geninfo_all_blocks=1 00:09:08.348 --rc geninfo_unexecuted_blocks=1 00:09:08.348 00:09:08.348 ' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.348 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.349 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:08.349 ************************************ 00:09:08.349 START TEST nvmf_example 00:09:08.349 ************************************ 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:08.349 * Looking for test storage... 
00:09:08.349 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.349 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.607 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.608 --rc genhtml_branch_coverage=1 00:09:08.608 --rc genhtml_function_coverage=1 00:09:08.608 --rc genhtml_legend=1 00:09:08.608 --rc geninfo_all_blocks=1 00:09:08.608 --rc geninfo_unexecuted_blocks=1 00:09:08.608 00:09:08.608 ' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.608 --rc genhtml_branch_coverage=1 00:09:08.608 --rc genhtml_function_coverage=1 00:09:08.608 --rc genhtml_legend=1 00:09:08.608 --rc geninfo_all_blocks=1 00:09:08.608 --rc geninfo_unexecuted_blocks=1 00:09:08.608 00:09:08.608 ' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.608 --rc genhtml_branch_coverage=1 00:09:08.608 --rc genhtml_function_coverage=1 00:09:08.608 --rc genhtml_legend=1 00:09:08.608 --rc geninfo_all_blocks=1 00:09:08.608 --rc geninfo_unexecuted_blocks=1 00:09:08.608 00:09:08.608 ' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.608 --rc genhtml_branch_coverage=1 00:09:08.608 --rc genhtml_function_coverage=1 00:09:08.608 --rc genhtml_legend=1 00:09:08.608 --rc geninfo_all_blocks=1 00:09:08.608 --rc geninfo_unexecuted_blocks=1 00:09:08.608 00:09:08.608 ' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.608 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:08.608 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.609 11:48:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
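
Before any devices are touched, the harness arms its teardown: the trace shows `trap nvmftestfini SIGINT SIGTERM EXIT` at nvmf/common.sh@474, so an interrupt, a kill, or a normal exit all run the same cleanup. A minimal sketch of that pattern; cleanup_fabric is a stand-in name, not SPDK's nvmftestfini:

    #!/usr/bin/env bash
    # Register cleanup before provisioning anything, so every exit path
    # (Ctrl-C, kill, normal return) unwinds the test fabric.
    cleanup_fabric() {
        echo "tearing down test fabric" >&2
        # kill the target, unload modules, remove test namespaces, ...
    }
    trap cleanup_fabric SIGINT SIGTERM EXIT

    echo "provisioning devices and starting target"
    # ... test body; any exit from here on triggers cleanup_fabric
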
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:15.174 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
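
The discovery loop traced here reduces to sysfs lookups: match each PCI function's vendor/device IDs, then resolve the netdev exposed under its net/ subdirectory, which is what produces the "Found net devices under 0000:da:00.0: mlx_0_0" lines just below. A minimal sketch of the same pattern, using the Mellanox IDs matched above (0x15b3:0x1015); the real script's pci_bus_cache array is replaced here by a direct sysfs walk:

    #!/usr/bin/env bash
    # Find NICs for a given vendor:device pair and print the kernel
    # netdev behind each matching PCI address.
    vendor=0x15b3 device=0x1015
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        # Each matching function exposes its netdev(s) under .../net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found ${pci##*/} -> ${net##*/}"
        done
    done
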
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:15.174 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:15.174 Found net devices under 0000:da:00.0: mlx_0_0 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:15.174 Found net devices under 0000:da:00.1: mlx_0_1 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.174 11:48:22 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:15.174 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:09:15.175 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:15.175 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:09:15.175 altname enp218s0f0np0
00:09:15.175 altname ens818f0np0
00:09:15.175 inet 192.168.100.8/24 scope global mlx_0_0
00:09:15.175 valid_lft forever preferred_lft forever
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:09:15.175 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:15.175 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:09:15.175 altname enp218s0f1np1
00:09:15.175 altname ens818f1np1
00:09:15.175 inet 192.168.100.9/24 scope global mlx_0_1
00:09:15.175 valid_lft forever preferred_lft forever
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:15.175 11:48:22
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:15.175 192.168.100.9' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:15.175 192.168.100.9' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:15.175 192.168.100.9' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3135237 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3135237 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3135237 ']' 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
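
The head/tail split traced above (nvmf/common.sh@485 and @486) is worth calling out: RDMA_IP_LIST is a newline-separated list, and the first and second target addresses are peeled off with `head -n 1` and `tail -n +2 | head -n 1`. A standalone sketch of the same parsing, with the addresses copied from the log:

    #!/usr/bin/env bash
    # First line -> first target IP, second line -> second target IP.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "first:  $NVMF_FIRST_TARGET_IP"    # 192.168.100.8
    echo "second: $NVMF_SECOND_TARGET_IP"   # 192.168.100.9
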
00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.175 11:48:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.434 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]]
00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:09:15.693 11:48:23 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:27.911 Initializing NVMe Controllers
00:09:27.911 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:27.911 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:27.911 Initialization complete. Launching workers.
00:09:27.911 ========================================================
00:09:27.911 Latency(us)
00:09:27.911 Device Information : IOPS MiB/s Average min max
00:09:27.911 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24421.70 95.40 2619.14 643.90 15973.07
00:09:27.911 ========================================================
00:09:27.911 Total : 24421.70 95.40 2619.14 643.90 15973.07
00:09:27.911
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:27.911 rmmod nvme_rdma
00:09:27.911 rmmod nvme_fabrics
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3135237 ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3135237
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3135237 ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3135237
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135237
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135237'
00:09:27.911 killing process with pid 3135237
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3135237
00:09:27.911 11:48:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3135237
00:09:27.911 nvmf threads initialize successfully
00:09:27.911 bdev subsystem init successfully
00:09:27.911 created a nvmf target service
00:09:27.911 create targets's poll groups done
00:09:27.911 all subsystems of target started
00:09:27.911 nvmf target is running
00:09:27.911 all subsystems of target stopped
00:09:27.911 destroy targets's poll groups done
00:09:27.911 destroyed the nvmf target service
00:09:27.911 bdev subsystem finish successfully
00:09:27.911 nvmf threads destroy successfully
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:27.911
00:09:27.911 real 0m18.878s
00:09:27.911 user 0m52.135s
00:09:27.911 sys 0m4.883s
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:27.911 ************************************
00:09:27.911 END TEST nvmf_example
00:09:27.911 ************************************
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:27.911 ************************************
00:09:27.911 START TEST nvmf_filesystem
00:09:27.911 ************************************
00:09:27.911 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:09:27.911 * Looking for test storage...
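
Stepping back over the example run that just ended: the target was configured entirely through rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). Outside the harness the same sequence is normally issued with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock; the sketch below mirrors the arguments traced above, but flag spellings should be checked against the rpc.py of the SPDK revision in use:

    #!/usr/bin/env bash
    # Rebuild of the traced target setup via rpc.py (values taken from the
    # log above; rpc.py path and socket are the SPDK defaults).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

The workload itself was the spdk_nvme_perf invocation shown in the log (-q 64 -o 4096 -w randrw -M 30 -t 10 against the rdma listener), which produced the roughly 24.4k IOPS table above.
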
00:09:27.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.912 --rc genhtml_branch_coverage=1 00:09:27.912 --rc genhtml_function_coverage=1 00:09:27.912 --rc genhtml_legend=1 00:09:27.912 --rc geninfo_all_blocks=1 00:09:27.912 --rc geninfo_unexecuted_blocks=1 00:09:27.912 00:09:27.912 ' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.912 --rc genhtml_branch_coverage=1 00:09:27.912 --rc genhtml_function_coverage=1 00:09:27.912 --rc genhtml_legend=1 00:09:27.912 --rc geninfo_all_blocks=1 00:09:27.912 --rc geninfo_unexecuted_blocks=1 00:09:27.912 00:09:27.912 ' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.912 --rc genhtml_branch_coverage=1 00:09:27.912 --rc genhtml_function_coverage=1 00:09:27.912 --rc genhtml_legend=1 00:09:27.912 --rc geninfo_all_blocks=1 00:09:27.912 --rc geninfo_unexecuted_blocks=1 00:09:27.912 00:09:27.912 ' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.912 --rc genhtml_branch_coverage=1 00:09:27.912 --rc genhtml_function_coverage=1 00:09:27.912 --rc genhtml_legend=1 00:09:27.912 --rc geninfo_all_blocks=1 00:09:27.912 --rc geninfo_unexecuted_blocks=1 00:09:27.912 00:09:27.912 ' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:27.912 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:27.912 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:27.912 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:27.913 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]]
00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:27.913 #define SPDK_CONFIG_H
00:09:27.913 #define SPDK_CONFIG_AIO_FSDEV 1
00:09:27.913 #define SPDK_CONFIG_APPS 1
00:09:27.913 #define SPDK_CONFIG_ARCH native
00:09:27.913 #undef SPDK_CONFIG_ASAN
00:09:27.913 #undef SPDK_CONFIG_AVAHI
00:09:27.913 #undef SPDK_CONFIG_CET
00:09:27.913 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:27.913 #define SPDK_CONFIG_COVERAGE 1
00:09:27.913 #define SPDK_CONFIG_CROSS_PREFIX
00:09:27.913 #undef SPDK_CONFIG_CRYPTO
00:09:27.913 #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:27.913 #undef SPDK_CONFIG_CUSTOMOCF
00:09:27.913 #undef SPDK_CONFIG_DAOS
00:09:27.913 #define SPDK_CONFIG_DAOS_DIR
00:09:27.913 #define SPDK_CONFIG_DEBUG 1
00:09:27.913 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:27.913 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:09:27.913 #define SPDK_CONFIG_DPDK_INC_DIR
00:09:27.913 #define SPDK_CONFIG_DPDK_LIB_DIR
00:09:27.913 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:27.913 #undef SPDK_CONFIG_DPDK_UADK
00:09:27.913 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:09:27.913 #define SPDK_CONFIG_EXAMPLES 1
00:09:27.913 #undef SPDK_CONFIG_FC
00:09:27.913 #define SPDK_CONFIG_FC_PATH
00:09:27.913 #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:27.913 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:27.913 #define SPDK_CONFIG_FSDEV 1
00:09:27.913 #undef SPDK_CONFIG_FUSE
00:09:27.913 #undef SPDK_CONFIG_FUZZER
00:09:27.913 #define SPDK_CONFIG_FUZZER_LIB
00:09:27.913 #undef SPDK_CONFIG_GOLANG
00:09:27.913 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:27.913 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:27.913 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:27.913 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:27.913 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:27.913 #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:27.913 #undef SPDK_CONFIG_HAVE_LZ4
00:09:27.913 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:27.913 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:27.913 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:27.913 #define SPDK_CONFIG_IDXD 1
00:09:27.913 #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:27.913 #undef SPDK_CONFIG_IPSEC_MB
00:09:27.913 #define SPDK_CONFIG_IPSEC_MB_DIR
00:09:27.913 #define SPDK_CONFIG_ISAL 1
00:09:27.913 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:27.913 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:27.913 #define SPDK_CONFIG_LIBDIR
00:09:27.913 #undef SPDK_CONFIG_LTO
00:09:27.913 #define SPDK_CONFIG_MAX_LCORES 128
00:09:27.913 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:27.913 #define SPDK_CONFIG_NVME_CUSE 1
00:09:27.913 #undef SPDK_CONFIG_OCF
00:09:27.913 #define SPDK_CONFIG_OCF_PATH
00:09:27.913 #define SPDK_CONFIG_OPENSSL_PATH
00:09:27.913 #undef SPDK_CONFIG_PGO_CAPTURE
00:09:27.913 #define SPDK_CONFIG_PGO_DIR
00:09:27.913 #undef SPDK_CONFIG_PGO_USE
00:09:27.913 #define SPDK_CONFIG_PREFIX /usr/local
00:09:27.913 #undef SPDK_CONFIG_RAID5F
00:09:27.913 #undef SPDK_CONFIG_RBD
00:09:27.913 #define SPDK_CONFIG_RDMA 1
00:09:27.913 #define SPDK_CONFIG_RDMA_PROV verbs
00:09:27.913 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:27.913 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:27.913 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:27.913 #define SPDK_CONFIG_SHARED 1
00:09:27.913 #undef SPDK_CONFIG_SMA
00:09:27.913
#define SPDK_CONFIG_TESTS 1 00:09:27.913 #undef SPDK_CONFIG_TSAN 00:09:27.913 #define SPDK_CONFIG_UBLK 1 00:09:27.913 #define SPDK_CONFIG_UBSAN 1 00:09:27.913 #undef SPDK_CONFIG_UNIT_TESTS 00:09:27.913 #undef SPDK_CONFIG_URING 00:09:27.913 #define SPDK_CONFIG_URING_PATH 00:09:27.913 #undef SPDK_CONFIG_URING_ZNS 00:09:27.913 #undef SPDK_CONFIG_USDT 00:09:27.913 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:27.913 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:27.913 #undef SPDK_CONFIG_VFIO_USER 00:09:27.913 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:27.913 #define SPDK_CONFIG_VHOST 1 00:09:27.913 #define SPDK_CONFIG_VIRTIO 1 00:09:27.913 #undef SPDK_CONFIG_VTUNE 00:09:27.913 #define SPDK_CONFIG_VTUNE_DIR 00:09:27.913 #define SPDK_CONFIG_WERROR 1 00:09:27.913 #define SPDK_CONFIG_WPDK_DIR 00:09:27.913 #undef SPDK_CONFIG_XNVME 00:09:27.913 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.913 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:27.914 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:27.914 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:27.915 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3137493 ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3137493 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.LdmIj5 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LdmIj5/tests/target /tmp/spdk.LdmIj5 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189778944000 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6185029632 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97968525312 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.916 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169744896 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23052288 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981612032 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=376832 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:27.917 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:27.917 * Looking for test storage... 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189778944000 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8399622144 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.917 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:27.917 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:27.917 11:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.917 --rc genhtml_branch_coverage=1 00:09:27.917 --rc genhtml_function_coverage=1 00:09:27.917 --rc genhtml_legend=1 00:09:27.917 --rc geninfo_all_blocks=1 00:09:27.917 --rc geninfo_unexecuted_blocks=1 00:09:27.917 00:09:27.917 ' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.917 --rc genhtml_branch_coverage=1 00:09:27.917 --rc genhtml_function_coverage=1 00:09:27.917 --rc genhtml_legend=1 00:09:27.917 --rc geninfo_all_blocks=1 00:09:27.917 --rc geninfo_unexecuted_blocks=1 00:09:27.917 00:09:27.917 ' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.917 --rc genhtml_branch_coverage=1 00:09:27.917 --rc genhtml_function_coverage=1 00:09:27.917 --rc genhtml_legend=1 00:09:27.917 --rc geninfo_all_blocks=1 00:09:27.917 --rc geninfo_unexecuted_blocks=1 00:09:27.917 00:09:27.917 ' 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.917 --rc genhtml_branch_coverage=1 00:09:27.917 --rc genhtml_function_coverage=1 00:09:27.917 --rc genhtml_legend=1 00:09:27.917 --rc geninfo_all_blocks=1 00:09:27.917 --rc geninfo_unexecuted_blocks=1 00:09:27.917 00:09:27.917 ' 00:09:27.917 
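(The trace above shows scripts/common.sh picking lcov flags: it splits the installed "lcov --version" value (1.15 on this node) and the threshold 2 on dots, compares the fields numerically, and only exports the --rc lcov_* coverage options when the installed lcov is older than 2. A minimal standalone sketch of that dotted-version check follows; cmp_lt is a hypothetical helper name for illustration, not SPDK's verbatim cmp_versions implementation, while the exported flag values are the ones visible in the log.)

# Illustrative sketch of the dotted-version comparison traced above.
# cmp_lt is a hypothetical helper, not SPDK's exact cmp_versions.
cmp_lt() {                       # succeed when dotted version $1 < $2
    local IFS=.-                 # split fields on dots and dashes
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # earlier field decides
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                     # equal versions are not less-than
}

# lcov 1.x still needs the --rc lcov_* spellings exported in the log above.
if cmp_lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

(With lcov 1.15 installed, cmp_lt "1.15" 2 succeeds on the first field, which is why the trace above ends by exporting those branch/function coverage options before moving on.)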
11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.917 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.918 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.918 11:48:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:34.491 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:34.491 
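The scan above is common.sh matching known vendor:device ID pairs (Intel E810/X722, Mellanox ConnectX) against a cached PCI bus map, then keeping only the mlx list because SPDK_TEST_NVMF_NICS=mlx5; allocate_nic_ips later derives each matched netdev's IPv4 address. A rough standalone equivalent, assuming lspci is available (the test itself reads its own pci_bus_cache rather than shelling out):

# list Mellanox (0x15b3) NICs the way the allowlist above would match them
for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    echo "Found $pci (Mellanox)"
done
# derive the IPv4 address of a matched netdev, same pipeline this trace uses below
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
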
11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:34.491 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:34.491 Found net devices under 0000:da:00.0: mlx_0_0 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:34.491 Found net devices under 0000:da:00.1: mlx_0_1 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:34.491 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.492 11:48:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:34.492 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.492 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:09:34.492 altname enp218s0f0np0 00:09:34.492 altname ens818f0np0 00:09:34.492 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.492 valid_lft forever preferred_lft forever 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:34.492 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.492 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:09:34.492 altname enp218s0f1np1 00:09:34.492 altname ens818f1np1 00:09:34.492 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.492 valid_lft forever preferred_lft forever 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:34.492 11:48:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.492 192.168.100.9' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:34.492 192.168.100.9' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:34.492 192.168.100.9' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 ************************************ 00:09:34.492 START TEST nvmf_filesystem_no_in_capsule 00:09:34.492 ************************************ 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:34.492 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.493 11:48:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3140560 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3140560 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3140560 ']' 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 [2024-12-09 11:48:41.680952] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:09:34.493 [2024-12-09 11:48:41.680997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.493 [2024-12-09 11:48:41.761134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.493 [2024-12-09 11:48:41.803094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.493 [2024-12-09 11:48:41.803131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.493 [2024-12-09 11:48:41.803139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.493 [2024-12-09 11:48:41.803145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.493 [2024-12-09 11:48:41.803152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
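nvmfappstart has just launched build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF, and waitforlisten is polling until the RPC socket answers. A minimal sketch of that wait pattern, assuming scripts/rpc.py from the SPDK tree is on hand (the real helper adds retry limits beyond the liveness check shown here):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # stop waiting if the target died
    sleep 0.5
done
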
00:09:34.493 [2024-12-09 11:48:41.804726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.493 [2024-12-09 11:48:41.804846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.493 [2024-12-09 11:48:41.804921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.493 [2024-12-09 11:48:41.804922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 [2024-12-09 11:48:41.951290] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:34.493 [2024-12-09 11:48:41.973931] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe43940/0xe47e30) succeed. 00:09:34.493 [2024-12-09 11:48:41.985283] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe44fd0/0xe894d0) succeed. 
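With the reactors up and both IB devices created, target/filesystem.sh configures the target and the host connects over RDMA. Condensed from the RPC calls and the nvme connect in the surrounding trace (the NQN, serial, address, and port are this run's values, not defaults):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
rpc.py bdev_malloc_create 512 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
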
00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 [2024-12-09 11:48:42.230115] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:34.493 11:48:42 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:34.493 { 00:09:34.493 "name": "Malloc1", 00:09:34.493 "aliases": [ 00:09:34.493 "b6d4fd3c-9975-48a0-8262-e51d17d937c9" 00:09:34.493 ], 00:09:34.493 "product_name": "Malloc disk", 00:09:34.493 "block_size": 512, 00:09:34.493 "num_blocks": 1048576, 00:09:34.493 "uuid": "b6d4fd3c-9975-48a0-8262-e51d17d937c9", 00:09:34.493 "assigned_rate_limits": { 00:09:34.493 "rw_ios_per_sec": 0, 00:09:34.493 "rw_mbytes_per_sec": 0, 00:09:34.493 "r_mbytes_per_sec": 0, 00:09:34.493 "w_mbytes_per_sec": 0 00:09:34.493 }, 00:09:34.493 "claimed": true, 00:09:34.493 "claim_type": "exclusive_write", 00:09:34.493 "zoned": false, 00:09:34.493 "supported_io_types": { 00:09:34.493 "read": true, 00:09:34.493 "write": true, 00:09:34.493 "unmap": true, 00:09:34.493 "flush": true, 00:09:34.493 "reset": true, 00:09:34.493 "nvme_admin": false, 00:09:34.493 "nvme_io": false, 00:09:34.493 "nvme_io_md": false, 00:09:34.493 "write_zeroes": true, 00:09:34.493 "zcopy": true, 00:09:34.493 "get_zone_info": false, 00:09:34.493 "zone_management": false, 00:09:34.493 "zone_append": false, 00:09:34.493 "compare": false, 00:09:34.493 "compare_and_write": false, 00:09:34.493 "abort": true, 00:09:34.493 "seek_hole": false, 00:09:34.493 "seek_data": false, 00:09:34.493 "copy": true, 00:09:34.493 "nvme_iov_md": false 00:09:34.493 }, 00:09:34.493 "memory_domains": [ 00:09:34.493 { 00:09:34.493 "dma_device_id": "system", 00:09:34.493 "dma_device_type": 1 00:09:34.493 }, 00:09:34.493 { 00:09:34.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.493 "dma_device_type": 2 00:09:34.493 } 00:09:34.493 ], 00:09:34.493 "driver_specific": {} 00:09:34.493 } 00:09:34.493 ]' 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:34.493 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:34.494 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:34.494 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:34.494 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:09:34.494 11:48:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:35.430 11:48:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.430 11:48:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:35.430 11:48:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.430 11:48:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:35.430 11:48:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:37.342 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:37.604 11:48:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.540 ************************************ 00:09:38.540 START TEST filesystem_ext4 00:09:38.540 ************************************ 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:38.540 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:38.540 mke2fs 1.47.0 (5-Feb-2023) 00:09:38.799 Discarding device blocks: 0/522240 done 00:09:38.799 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:38.799 Filesystem UUID: ac4514fa-4615-41e4-a3c6-2c1f7e0c9fad 00:09:38.799 Superblock backups stored on 
blocks: 00:09:38.799 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:38.800 00:09:38.800 Allocating group tables: 0/64 done 00:09:38.800 Writing inode tables: 0/64 done 00:09:38.800 Creating journal (8192 blocks): done 00:09:38.800 Writing superblocks and filesystem accounting information: 0/64 done 00:09:38.800 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3140560 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:38.800 00:09:38.800 real 0m0.184s 00:09:38.800 user 0m0.026s 00:09:38.800 sys 0m0.064s 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:38.800 ************************************ 00:09:38.800 END TEST filesystem_ext4 00:09:38.800 ************************************ 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:09:38.800 ************************************ 00:09:38.800 START TEST filesystem_btrfs 00:09:38.800 ************************************ 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:38.800 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:39.059 btrfs-progs v6.8.1 00:09:39.059 See https://btrfs.readthedocs.io for more information. 00:09:39.059 00:09:39.059 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
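make_filesystem, traced here for btrfs, varies only the force flag per filesystem: mkfs.ext4 forces with -F while btrfs and xfs use -f. A sketch of that dispatch, condensed from the branches visible in the trace:

make_fs() {
    local fstype=$1 dev=$2 force=-f
    [ "$fstype" = ext4 ] && force=-F   # ext4's mkfs spells "force" differently
    mkfs."$fstype" "$force" "$dev"
}
make_fs btrfs /dev/nvme0n1p1
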
00:09:39.059 NOTE: several default settings have changed in version 5.15, please make sure 00:09:39.059 this does not affect your deployments: 00:09:39.059 - DUP for metadata (-m dup) 00:09:39.059 - enabled no-holes (-O no-holes) 00:09:39.059 - enabled free-space-tree (-R free-space-tree) 00:09:39.059 00:09:39.059 Label: (null) 00:09:39.059 UUID: 94cf6ce8-c337-4efe-a37b-e0835faf60b0 00:09:39.059 Node size: 16384 00:09:39.059 Sector size: 4096 (CPU page size: 4096) 00:09:39.059 Filesystem size: 510.00MiB 00:09:39.059 Block group profiles: 00:09:39.059 Data: single 8.00MiB 00:09:39.059 Metadata: DUP 32.00MiB 00:09:39.059 System: DUP 8.00MiB 00:09:39.059 SSD detected: yes 00:09:39.059 Zoned device: no 00:09:39.059 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:39.059 Checksum: crc32c 00:09:39.059 Number of devices: 1 00:09:39.059 Devices: 00:09:39.059 ID SIZE PATH 00:09:39.059 1 510.00MiB /dev/nvme0n1p1 00:09:39.059 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:39.059 11:48:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3140560 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:39.059 00:09:39.059 real 0m0.231s 00:09:39.059 user 0m0.023s 00:09:39.059 sys 0m0.115s 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.059 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:39.060 ************************************ 00:09:39.060 END TEST filesystem_btrfs 
00:09:39.060 ************************************ 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.060 ************************************ 00:09:39.060 START TEST filesystem_xfs 00:09:39.060 ************************************ 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:39.060 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:39.319 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:39.319 = sectsz=512 attr=2, projid32bit=1 00:09:39.319 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:39.319 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:39.319 data = bsize=4096 blocks=130560, imaxpct=25 00:09:39.319 = sunit=0 swidth=0 blks 00:09:39.319 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:39.319 log =internal log bsize=4096 blocks=16384, version=2 00:09:39.319 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:39.319 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:39.319 Discarding blocks...Done. 
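After each mkfs, the test runs the same smoke sequence already seen for ext4 and btrfs and repeated for xfs below: mount, create and remove a file, unmount, then confirm the target process and the exported namespace both survived. Condensed from filesystem.sh@23-43 in this trace:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # target still running?
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
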
00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3140560 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:39.319 00:09:39.319 real 0m0.196s 00:09:39.319 user 0m0.029s 00:09:39.319 sys 0m0.061s 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:39.319 ************************************ 00:09:39.319 END TEST filesystem_xfs 00:09:39.319 ************************************ 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:39.319 11:48:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.256 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.256 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.256 11:48:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.256 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3140560 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3140560 ']' 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3140560 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140560 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140560' 00:09:40.515 killing process with pid 3140560 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3140560 00:09:40.515 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3140560 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:40.775 00:09:40.775 real 0m7.140s 00:09:40.775 user 0m27.807s 00:09:40.775 sys 0m1.029s 00:09:40.775 11:48:48 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.775 ************************************ 00:09:40.775 END TEST nvmf_filesystem_no_in_capsule 00:09:40.775 ************************************ 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.775 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.034 ************************************ 00:09:41.034 START TEST nvmf_filesystem_in_capsule 00:09:41.034 ************************************ 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3142017 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3142017 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3142017 ']' 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
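Boiled down, nvmfappstart as traced here backgrounds the target binary and then blocks until its RPC socket answers. A sketch under the assumption that readiness can be approximated by probing for /var/tmp/spdk.sock (the real waitforlisten polls with the max_retries=100 budget visible above):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude readiness probe: wait for the UNIX-domain RPC socket to appear
    while [ ! -S /var/tmp/spdk.sock ]; do
        sleep 0.1
    done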
00:09:41.034 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.035 11:48:48 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.035 [2024-12-09 11:48:48.893671] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:09:41.035 [2024-12-09 11:48:48.893713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.035 [2024-12-09 11:48:48.970645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.035 [2024-12-09 11:48:49.013017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.035 [2024-12-09 11:48:49.013055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.035 [2024-12-09 11:48:49.013062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.035 [2024-12-09 11:48:49.013068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.035 [2024-12-09 11:48:49.013073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.035 [2024-12-09 11:48:49.014580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.035 [2024-12-09 11:48:49.014685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.035 [2024-12-09 11:48:49.014796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.035 [2024-12-09 11:48:49.014796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.294 [2024-12-09 11:48:49.171495] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d15940/0x1d19e30) 
succeed. 00:09:41.294 [2024-12-09 11:48:49.182819] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d16fd0/0x1d5b4d0) succeed. 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.294 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.555 Malloc1 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.555 [2024-12-09 11:48:49.464399] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
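The rpc_cmd calls traced above assemble the in-capsule target end to end; the method names and arguments below are copied from the trace, and the only assumption is that rpc_cmd forwards to scripts/rpc.py over that socket. The lone difference from the no-in-capsule run is the -c 4096 in-capsule data size on the transport:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

After the listener line, the target emits the 'NVMe/RDMA Target Listening on 192.168.100.8 port 4420' notice seen above.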
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:41.555 {
00:09:41.555 "name": "Malloc1",
00:09:41.555 "aliases": [
00:09:41.555 "6c113b25-03d5-4582-8641-21d87a04baa9"
00:09:41.555 ],
00:09:41.555 "product_name": "Malloc disk",
00:09:41.555 "block_size": 512,
00:09:41.555 "num_blocks": 1048576,
00:09:41.555 "uuid": "6c113b25-03d5-4582-8641-21d87a04baa9",
00:09:41.555 "assigned_rate_limits": {
00:09:41.555 "rw_ios_per_sec": 0,
00:09:41.555 "rw_mbytes_per_sec": 0,
00:09:41.555 "r_mbytes_per_sec": 0,
00:09:41.555 "w_mbytes_per_sec": 0
00:09:41.555 },
00:09:41.555 "claimed": true,
00:09:41.555 "claim_type": "exclusive_write",
00:09:41.555 "zoned": false,
00:09:41.555 "supported_io_types": {
00:09:41.555 "read": true,
00:09:41.555 "write": true,
00:09:41.555 "unmap": true,
00:09:41.555 "flush": true,
00:09:41.555 "reset": true,
00:09:41.555 "nvme_admin": false,
00:09:41.555 "nvme_io": false,
00:09:41.555 "nvme_io_md": false,
00:09:41.555 "write_zeroes": true,
00:09:41.555 "zcopy": true,
00:09:41.555 "get_zone_info": false,
00:09:41.555 "zone_management": false,
00:09:41.555 "zone_append": false,
00:09:41.555 "compare": false,
00:09:41.555 "compare_and_write": false,
00:09:41.555 "abort": true,
00:09:41.555 "seek_hole": false,
00:09:41.555 "seek_data": false,
00:09:41.555 "copy": true,
00:09:41.555 "nvme_iov_md": false
00:09:41.555 },
00:09:41.555 "memory_domains": [
00:09:41.555 {
00:09:41.555 "dma_device_id": "system",
00:09:41.555 "dma_device_type": 1
00:09:41.555 },
00:09:41.555 {
00:09:41.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:41.555 "dma_device_type": 2
00:09:41.555 }
00:09:41.555 ],
00:09:41.555 "driver_specific": {}
00:09:41.555 }
00:09:41.555 ]'
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
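The JSON above is reduced by two jq filters to block_size=512 and num_blocks=1048576, from which the trace derives bdev_size=512 and malloc_size=536870912. A sketch of that arithmetic, assuming (consistently with those numbers) that get_bdev_size reports MiB:

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576
    bdev_size=$(( bs * nb / 1024 / 1024 ))        # 512 * 1048576 / 2^20 = 512 MiB
    malloc_size=$(( bdev_size * 1024 * 1024 ))    # 536870912 bytes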
00:09:41.555 11:48:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:42.932 11:48:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.932 11:48:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:42.932 11:48:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.932 11:48:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:42.932 11:48:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:44.836 11:48:52 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:44.836 11:48:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.773 ************************************ 00:09:45.773 START TEST filesystem_in_capsule_ext4 00:09:45.773 ************************************ 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:45.773 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:45.773 mke2fs 1.47.0 (5-Feb-2023) 00:09:46.033 Discarding device blocks: 0/522240 done 00:09:46.033 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:46.033 Filesystem UUID: bf43ad8d-fb9b-498e-8a93-0a119dee6027 00:09:46.033 
Superblock backups stored on blocks: 00:09:46.033 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:46.033 00:09:46.033 Allocating group tables: 0/64 done 00:09:46.033 Writing inode tables: 0/64 done 00:09:46.033 Creating journal (8192 blocks): done 00:09:46.033 Writing superblocks and filesystem accounting information: 0/64 done 00:09:46.033 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3142017 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.033 00:09:46.033 real 0m0.186s 00:09:46.033 user 0m0.023s 00:09:46.033 sys 0m0.070s 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:46.033 ************************************ 00:09:46.033 END TEST filesystem_in_capsule_ext4 00:09:46.033 ************************************ 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.033 11:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.033 11:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.033 ************************************ 00:09:46.033 START TEST filesystem_in_capsule_btrfs 00:09:46.033 ************************************ 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:46.033 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:46.293 btrfs-progs v6.8.1 00:09:46.293 See https://btrfs.readthedocs.io for more information. 00:09:46.293 00:09:46.293 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:46.293 NOTE: several default settings have changed in version 5.15, please make sure 00:09:46.293 this does not affect your deployments: 00:09:46.293 - DUP for metadata (-m dup) 00:09:46.293 - enabled no-holes (-O no-holes) 00:09:46.293 - enabled free-space-tree (-R free-space-tree) 00:09:46.293 00:09:46.293 Label: (null) 00:09:46.293 UUID: 8e7e7d4e-dd47-4431-b18b-0d56ea5737f2 00:09:46.293 Node size: 16384 00:09:46.293 Sector size: 4096 (CPU page size: 4096) 00:09:46.293 Filesystem size: 510.00MiB 00:09:46.293 Block group profiles: 00:09:46.293 Data: single 8.00MiB 00:09:46.293 Metadata: DUP 32.00MiB 00:09:46.293 System: DUP 8.00MiB 00:09:46.293 SSD detected: yes 00:09:46.293 Zoned device: no 00:09:46.293 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:46.293 Checksum: crc32c 00:09:46.293 Number of devices: 1 00:09:46.293 Devices: 00:09:46.293 ID SIZE PATH 00:09:46.293 1 510.00MiB /dev/nvme0n1p1 00:09:46.293 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3142017 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.293 00:09:46.293 real 0m0.235s 00:09:46.293 user 0m0.026s 00:09:46.293 sys 0m0.115s 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.293 ************************************ 00:09:46.293 END TEST filesystem_in_capsule_btrfs 00:09:46.293 ************************************ 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.293 ************************************ 00:09:46.293 START TEST filesystem_in_capsule_xfs 00:09:46.293 ************************************ 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:46.293 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:46.553 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:46.553 = sectsz=512 attr=2, projid32bit=1 00:09:46.553 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:46.553 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:46.553 data = bsize=4096 blocks=130560, imaxpct=25 00:09:46.553 = sunit=0 swidth=0 blks 00:09:46.553 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:46.553 log =internal log bsize=4096 blocks=16384, version=2 00:09:46.553 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:46.553 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:46.553 Discarding blocks...Done. 
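Every filesystem variant then runs the same smoke sequence from filesystem.sh lines 23-43, visible both after the xfs mkfs above and in the run that follows: mount, one file write and delete with syncs, unmount, then liveness checks. Condensed, with $nvmfpid standing in for the traced pid 3142017:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # signal 0 only checks the target is still alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exposed over fabrics
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table survived the I/O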
00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3142017 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.553 00:09:46.553 real 0m0.195s 00:09:46.553 user 0m0.020s 00:09:46.553 sys 0m0.072s 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:46.553 ************************************ 00:09:46.553 END TEST filesystem_in_capsule_xfs 00:09:46.553 ************************************ 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:46.553 11:48:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:47.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.490 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:47.490 11:48:55 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:47.490 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:47.490 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.490 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:47.490 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3142017 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3142017 ']' 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3142017 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142017 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142017' 00:09:47.749 killing process with pid 3142017 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3142017 00:09:47.749 11:48:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3142017 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:48.008 00:09:48.008 real 0m7.179s 
00:09:48.008 user 0m27.892s 00:09:48.008 sys 0m1.052s 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.008 ************************************ 00:09:48.008 END TEST nvmf_filesystem_in_capsule 00:09:48.008 ************************************ 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:48.008 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:48.267 rmmod nvme_rdma 00:09:48.267 rmmod nvme_fabrics 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:48.267 00:09:48.267 real 0m20.878s 00:09:48.267 user 0m57.813s 00:09:48.267 sys 0m6.685s 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:48.267 ************************************ 00:09:48.267 END TEST nvmf_filesystem 00:09:48.267 ************************************ 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:48.267 ************************************ 00:09:48.267 START TEST nvmf_target_discovery 00:09:48.267 ************************************ 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:48.267 * Looking for test storage... 
00:09:48.267 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.267 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.527 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.527 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.527 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.527 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.527 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.528 --rc genhtml_branch_coverage=1 00:09:48.528 --rc genhtml_function_coverage=1 00:09:48.528 --rc genhtml_legend=1 00:09:48.528 --rc geninfo_all_blocks=1 00:09:48.528 --rc geninfo_unexecuted_blocks=1 00:09:48.528 00:09:48.528 ' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.528 --rc genhtml_branch_coverage=1 00:09:48.528 --rc genhtml_function_coverage=1 00:09:48.528 --rc genhtml_legend=1 00:09:48.528 --rc geninfo_all_blocks=1 00:09:48.528 --rc geninfo_unexecuted_blocks=1 00:09:48.528 00:09:48.528 ' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.528 --rc genhtml_branch_coverage=1 00:09:48.528 --rc genhtml_function_coverage=1 00:09:48.528 --rc genhtml_legend=1 00:09:48.528 --rc geninfo_all_blocks=1 00:09:48.528 --rc geninfo_unexecuted_blocks=1 00:09:48.528 00:09:48.528 ' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.528 --rc genhtml_branch_coverage=1 00:09:48.528 --rc genhtml_function_coverage=1 00:09:48.528 --rc genhtml_legend=1 00:09:48.528 --rc geninfo_all_blocks=1 00:09:48.528 --rc geninfo_unexecuted_blocks=1 00:09:48.528 00:09:48.528 ' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.528 11:48:56 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.528 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:48.528 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.529 11:48:56 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.100 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:55.100 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
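Note on the shell error traced a little earlier: the message "/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from evaluating '[' '' -eq 1 ']', i.e. the variable under test expanded to an empty string and '[' cannot compare that as an integer. A defensive rewrite of that kind of test might look like the sketch below, where some_flag is a hypothetical stand-in for whichever variable arrived empty, not the actual name used in common.sh:

  some_flag=""                           # stand-in for the empty variable hit at line 33
  if [ "${some_flag:-0}" -eq 1 ]; then   # ':-0' supplies a numeric default, so '[' always sees an integer
    echo "flag enabled"
  fi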
00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:55.101 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:55.101 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:55.101 Found net devices under 0000:da:00.0: mlx_0_0 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.101 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:55.101 Found net devices under 0000:da:00.1: mlx_0_1 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
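The modprobe sequence traced just above loads the kernel pieces an RDMA-capable NVMe-oF target depends on. A condensed equivalent, for reference (same modules as in the trace; the loop and fail-fast handling are illustrative, not SPDK's exact code):

  # Load the IB/RDMA stack the test relies on; abort early if one is missing.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || { echo "missing kernel module: $mod" >&2; exit 1; }
  done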
00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:55.101 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:55.102 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.102 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:09:55.102 altname enp218s0f0np0 00:09:55.102 altname ens818f0np0 00:09:55.102 inet 192.168.100.8/24 scope global mlx_0_0 00:09:55.102 valid_lft forever preferred_lft forever 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:55.102 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:55.102 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.102 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:09:55.102 altname enp218s0f1np1 00:09:55.102 altname ens818f1np1 00:09:55.102 inet 192.168.100.9/24 scope global mlx_0_1 00:09:55.102 valid_lft forever preferred_lft forever 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
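Both NIC addresses above (192.168.100.8 and 192.168.100.9) are pulled with the same three-stage pipeline. Reconstructed as a standalone helper below; the name mirrors the traced get_ip_address, but this is an approximation rebuilt from the trace, not a verbatim copy of the common.sh function:

  get_ip_address() {
    local interface=$1
    # 'ip -o' prints one address per line; field 4 is the CIDR, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed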
00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:55.102 192.168.100.9' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:55.102 192.168.100.9' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:55.102 192.168.100.9' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.102 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3146567 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3146567 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3146567 ']' 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 [2024-12-09 11:49:02.345559] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:09:55.102 [2024-12-09 11:49:02.345605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.102 [2024-12-09 11:49:02.424767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.102 [2024-12-09 11:49:02.464195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.102 [2024-12-09 11:49:02.464233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.102 [2024-12-09 11:49:02.464240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.102 [2024-12-09 11:49:02.464246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.102 [2024-12-09 11:49:02.464250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
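nvmfappstart above launches the target and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming the workspace layout shown in this log and using the stock rpc_get_methods RPC as the liveness probe (the polling loop is an approximation of the harness's waitforlisten, not a copy):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                  # the trace records this as nvmfpid=3146567
  # Poll until the target listens on /var/tmp/spdk.sock (max ~10 s).
  for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done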
00:09:55.102 [2024-12-09 11:49:02.465797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.102 [2024-12-09 11:49:02.465919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.102 [2024-12-09 11:49:02.465953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.102 [2024-12-09 11:49:02.465954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 [2024-12-09 11:49:02.640115] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc79940/0xc7de30) succeed. 00:09:55.102 [2024-12-09 11:49:02.651819] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc7afd0/0xcbf4d0) succeed. 
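With both mlx5 IB devices created, the trace below walks target/discovery.sh through four identical subsystem setups. Collapsed into loop form for reference; every command is exactly as traced, and rpc_cmd is the test harness's wrapper around scripts/rpc.py against the running target:

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430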
00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 Null1 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 [2024-12-09 11:49:02.825339] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 Null2 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:55.103 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 Null3 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 Null4 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 11:49:02 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:09:55.103 00:09:55.103 Discovery Log Number of Records 6, Generation counter 6 00:09:55.103 =====Discovery Log Entry 0====== 00:09:55.103 trtype: rdma 00:09:55.103 adrfam: ipv4 00:09:55.103 subtype: current discovery subsystem 00:09:55.103 treq: not required 00:09:55.103 portid: 0 00:09:55.103 trsvcid: 4420 00:09:55.103 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:55.103 traddr: 192.168.100.8 00:09:55.103 eflags: explicit discovery connections, duplicate discovery information 00:09:55.103 rdma_prtype: not specified 00:09:55.103 rdma_qptype: connected 00:09:55.103 rdma_cms: rdma-cm 00:09:55.103 rdma_pkey: 0x0000 00:09:55.103 =====Discovery Log Entry 1====== 00:09:55.103 trtype: rdma 00:09:55.103 adrfam: ipv4 00:09:55.103 subtype: nvme subsystem 00:09:55.103 treq: not required 00:09:55.103 portid: 0 00:09:55.103 trsvcid: 4420 00:09:55.103 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:55.103 traddr: 192.168.100.8 00:09:55.103 eflags: none 00:09:55.103 rdma_prtype: not specified 00:09:55.103 rdma_qptype: connected 00:09:55.103 rdma_cms: rdma-cm 00:09:55.103 rdma_pkey: 0x0000 00:09:55.103 =====Discovery Log Entry 2====== 00:09:55.103 trtype: rdma 00:09:55.103 adrfam: ipv4 00:09:55.103 subtype: nvme subsystem 00:09:55.103 treq: not required 00:09:55.103 portid: 0 00:09:55.103 trsvcid: 4420 00:09:55.103 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:55.103 traddr: 192.168.100.8 00:09:55.103 eflags: none 00:09:55.103 rdma_prtype: not specified 00:09:55.103 rdma_qptype: connected 00:09:55.103 rdma_cms: rdma-cm 00:09:55.103 rdma_pkey: 0x0000 00:09:55.103 =====Discovery Log Entry 3====== 00:09:55.103 trtype: rdma 00:09:55.103 adrfam: ipv4 00:09:55.104 subtype: nvme subsystem 00:09:55.104 treq: not required 00:09:55.104 portid: 0 00:09:55.104 trsvcid: 4420 00:09:55.104 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:55.104 traddr: 192.168.100.8 00:09:55.104 eflags: none 00:09:55.104 rdma_prtype: not specified 00:09:55.104 rdma_qptype: connected 00:09:55.104 rdma_cms: rdma-cm 00:09:55.104 rdma_pkey: 0x0000 00:09:55.104 =====Discovery Log Entry 4====== 00:09:55.104 trtype: rdma 00:09:55.104 adrfam: ipv4 00:09:55.104 subtype: nvme subsystem 00:09:55.104 treq: not required 00:09:55.104 portid: 0 00:09:55.104 trsvcid: 4420 00:09:55.104 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:55.104 traddr: 192.168.100.8 00:09:55.104 eflags: none 00:09:55.104 rdma_prtype: not specified 00:09:55.104 rdma_qptype: connected 00:09:55.104 rdma_cms: rdma-cm 00:09:55.104 rdma_pkey: 0x0000 00:09:55.104 =====Discovery Log Entry 5====== 00:09:55.104 trtype: rdma 00:09:55.104 adrfam: ipv4 00:09:55.104 subtype: discovery subsystem referral 00:09:55.104 treq: not required 00:09:55.104 portid: 0 00:09:55.104 trsvcid: 4430 00:09:55.104 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:55.104 traddr: 192.168.100.8 00:09:55.104 eflags: none 00:09:55.104 rdma_prtype: unrecognized 00:09:55.104 rdma_qptype: unrecognized 00:09:55.104 rdma_cms: unrecognized 00:09:55.104 rdma_pkey: 0x0000 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:55.104 Perform nvmf subsystem discovery via RPC 00:09:55.104 11:49:03 
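The six-entry discovery log above can also be cross-checked from the RPC side. A hedged sketch combining both views; the hostnqn/hostid values are the ones generated for this run, and the '.[].nqn' filter matches the shape of the JSON printed below:

  # Client view: list the discovery log entries over RDMA with nvme-cli.
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
  # Target view: pull just the subsystem NQNs out of nvmf_get_subsystems.
  rpc_cmd nvmf_get_subsystems | jq -r '.[].nqn'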
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 [ 00:09:55.104 { 00:09:55.104 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.104 "subtype": "Discovery", 00:09:55.104 "listen_addresses": [ 00:09:55.104 { 00:09:55.104 "trtype": "RDMA", 00:09:55.104 "adrfam": "IPv4", 00:09:55.104 "traddr": "192.168.100.8", 00:09:55.104 "trsvcid": "4420" 00:09:55.104 } 00:09:55.104 ], 00:09:55.104 "allow_any_host": true, 00:09:55.104 "hosts": [] 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.104 "subtype": "NVMe", 00:09:55.104 "listen_addresses": [ 00:09:55.104 { 00:09:55.104 "trtype": "RDMA", 00:09:55.104 "adrfam": "IPv4", 00:09:55.104 "traddr": "192.168.100.8", 00:09:55.104 "trsvcid": "4420" 00:09:55.104 } 00:09:55.104 ], 00:09:55.104 "allow_any_host": true, 00:09:55.104 "hosts": [], 00:09:55.104 "serial_number": "SPDK00000000000001", 00:09:55.104 "model_number": "SPDK bdev Controller", 00:09:55.104 "max_namespaces": 32, 00:09:55.104 "min_cntlid": 1, 00:09:55.104 "max_cntlid": 65519, 00:09:55.104 "namespaces": [ 00:09:55.104 { 00:09:55.104 "nsid": 1, 00:09:55.104 "bdev_name": "Null1", 00:09:55.104 "name": "Null1", 00:09:55.104 "nguid": "5C89D832CD234F8E8334D7684D850296", 00:09:55.104 "uuid": "5c89d832-cd23-4f8e-8334-d7684d850296" 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.104 "subtype": "NVMe", 00:09:55.104 "listen_addresses": [ 00:09:55.104 { 00:09:55.104 "trtype": "RDMA", 00:09:55.104 "adrfam": "IPv4", 00:09:55.104 "traddr": "192.168.100.8", 00:09:55.104 "trsvcid": "4420" 00:09:55.104 } 00:09:55.104 ], 00:09:55.104 "allow_any_host": true, 00:09:55.104 "hosts": [], 00:09:55.104 "serial_number": "SPDK00000000000002", 00:09:55.104 "model_number": "SPDK bdev Controller", 00:09:55.104 "max_namespaces": 32, 00:09:55.104 "min_cntlid": 1, 00:09:55.104 "max_cntlid": 65519, 00:09:55.104 "namespaces": [ 00:09:55.104 { 00:09:55.104 "nsid": 1, 00:09:55.104 "bdev_name": "Null2", 00:09:55.104 "name": "Null2", 00:09:55.104 "nguid": "EF2C6122CEC540A688CC82E0044D0896", 00:09:55.104 "uuid": "ef2c6122-cec5-40a6-88cc-82e0044d0896" 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:55.104 "subtype": "NVMe", 00:09:55.104 "listen_addresses": [ 00:09:55.104 { 00:09:55.104 "trtype": "RDMA", 00:09:55.104 "adrfam": "IPv4", 00:09:55.104 "traddr": "192.168.100.8", 00:09:55.104 "trsvcid": "4420" 00:09:55.104 } 00:09:55.104 ], 00:09:55.104 "allow_any_host": true, 00:09:55.104 "hosts": [], 00:09:55.104 "serial_number": "SPDK00000000000003", 00:09:55.104 "model_number": "SPDK bdev Controller", 00:09:55.104 "max_namespaces": 32, 00:09:55.104 "min_cntlid": 1, 00:09:55.104 "max_cntlid": 65519, 00:09:55.104 "namespaces": [ 00:09:55.104 { 00:09:55.104 "nsid": 1, 00:09:55.104 "bdev_name": "Null3", 00:09:55.104 "name": "Null3", 00:09:55.104 "nguid": "33BABAF4F4BE4966AB918824825DD91D", 00:09:55.104 "uuid": "33babaf4-f4be-4966-ab91-8824825dd91d" 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 }, 00:09:55.104 { 00:09:55.104 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:55.104 "subtype": "NVMe", 00:09:55.104 "listen_addresses": [ 00:09:55.104 { 00:09:55.104 
"trtype": "RDMA", 00:09:55.104 "adrfam": "IPv4", 00:09:55.104 "traddr": "192.168.100.8", 00:09:55.104 "trsvcid": "4420" 00:09:55.104 } 00:09:55.104 ], 00:09:55.104 "allow_any_host": true, 00:09:55.104 "hosts": [], 00:09:55.104 "serial_number": "SPDK00000000000004", 00:09:55.104 "model_number": "SPDK bdev Controller", 00:09:55.104 "max_namespaces": 32, 00:09:55.104 "min_cntlid": 1, 00:09:55.104 "max_cntlid": 65519, 00:09:55.104 "namespaces": [ 00:09:55.104 { 00:09:55.104 "nsid": 1, 00:09:55.104 "bdev_name": "Null4", 00:09:55.104 "name": "Null4", 00:09:55.104 "nguid": "87B0F6E6BD8948839461DE3B96D43F61", 00:09:55.104 "uuid": "87b0f6e6-bd89-4883-9461-de3b96d43f61" 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 } 00:09:55.104 ] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:55.104 
11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:55.104 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.105 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:55.364 11:49:03 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:55.364 rmmod nvme_rdma 00:09:55.364 rmmod nvme_fabrics 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3146567 ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3146567 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3146567 ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3146567 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146567 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146567' 00:09:55.364 killing process with pid 3146567 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3146567 00:09:55.364 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3146567 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:55.623 00:09:55.623 real 0m7.377s 00:09:55.623 user 0m6.095s 00:09:55.623 sys 0m4.846s 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 ************************************ 00:09:55.623 END TEST nvmf_target_discovery 
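nvmftestfini, traced above, tears the host side down in a fixed order: sync, retry unloading nvme-rdma and nvme-fabrics (the rmmod lines are the modprobe -v output), then kill the nvmf_tgt process by pid. A rough equivalent, assuming $nvmfpid holds the target's pid as it does in the log:

  sync
  set +e                                 # module removal may fail while references drain
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e
  [ -n "$nvmfpid" ] && kill "$nvmfpid"   # killprocess in the log also waits for the exit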
00:09:55.623 ************************************ 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 ************************************ 00:09:55.623 START TEST nvmf_referrals 00:09:55.623 ************************************ 00:09:55.623 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:55.883 * Looking for test storage... 00:09:55.883 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.883 --rc genhtml_branch_coverage=1 00:09:55.883 --rc genhtml_function_coverage=1 00:09:55.883 --rc genhtml_legend=1 00:09:55.883 --rc geninfo_all_blocks=1 00:09:55.883 --rc geninfo_unexecuted_blocks=1 00:09:55.883 00:09:55.883 ' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.883 --rc genhtml_branch_coverage=1 00:09:55.883 --rc genhtml_function_coverage=1 00:09:55.883 --rc genhtml_legend=1 00:09:55.883 --rc geninfo_all_blocks=1 00:09:55.883 --rc geninfo_unexecuted_blocks=1 00:09:55.883 00:09:55.883 ' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.883 --rc genhtml_branch_coverage=1 00:09:55.883 --rc genhtml_function_coverage=1 00:09:55.883 --rc genhtml_legend=1 00:09:55.883 --rc geninfo_all_blocks=1 00:09:55.883 --rc geninfo_unexecuted_blocks=1 00:09:55.883 00:09:55.883 ' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.883 --rc genhtml_branch_coverage=1 00:09:55.883 --rc genhtml_function_coverage=1 00:09:55.883 --rc genhtml_legend=1 00:09:55.883 --rc geninfo_all_blocks=1 00:09:55.883 --rc geninfo_unexecuted_blocks=1 00:09:55.883 00:09:55.883 ' 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
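The lt/cmp_versions helper being traced here decides whether the installed lcov predates 2.0 by splitting both version strings on '.', '-' or ':' and comparing the pieces numerically, first differing field wins. A condensed sketch of that comparison (numeric components assumed, as in scripts/common.sh):

  version_lt() {
      local IFS='.-:'
      local -a a b; local v
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1    # first differing field decides
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      done
      return 1                                         # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the "lt 1.15 2" call above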
nvmf/common.sh@7 -- # uname -s 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.883 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.884 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.884 11:49:03 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:02.456 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:02.456 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:02.456 Found net devices under 0000:da:00.0: mlx_0_0 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.456 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:02.456 Found net devices under 0000:da:00.1: mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
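The device scan above walks every PCI function, keeps the ones whose vendor/device IDs appear in the e810/x722/mlx tables, and resolves each to its kernel netdev through sysfs, which is where the "Found net devices under 0000:da:00.x" lines come from. A sysfs-only sketch of the same idea for the Mellanox case:

  # Map RDMA-capable Mellanox NICs (vendor 0x15b3) to their net devices.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue
      dev=$(cat "$pci/device")                    # 0x1015 on this rig (ConnectX-4 Lx)
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/} ($dev)"
      done
  done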
[[ rdma == tcp ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:02.457 11:49:09 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:02.457 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:02.457 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:02.457 altname enp218s0f0np0 00:10:02.457 altname ens818f0np0 00:10:02.457 inet 192.168.100.8/24 scope global mlx_0_0 00:10:02.457 valid_lft forever preferred_lft forever 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:02.457 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:02.457 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:02.457 altname enp218s0f1np1 00:10:02.457 altname ens818f1np1 00:10:02.457 inet 192.168.100.9/24 scope global mlx_0_1 00:10:02.457 valid_lft forever preferred_lft forever 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:02.457 11:49:09 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:02.457 192.168.100.9' 
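All the interleaved awk/cut calls above reduce to one small helper: get_ip_address takes the interface's IPv4 CIDR from "ip -o -4 addr show" (field 4) and strips the prefix length. As a standalone function:

  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9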
00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:02.457 192.168.100.9' 00:10:02.457 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:02.458 192.168.100.9' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3149891 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3149891 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3149891 ']' 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.458 11:49:09 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 [2024-12-09 11:49:09.781873] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
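From here the harness splits the newline-separated RDMA_IP_LIST into first and second target IPs, loads nvme-rdma, and starts the target with the flags shown in the log. A sketch of that sequence, with a simple RPC poll standing in for waitforlisten:

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  modprobe nvme-rdma
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # same flags as nvmfappstart above
  nvmfpid=$!
  until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done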
00:10:02.458 [2024-12-09 11:49:09.781927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.458 [2024-12-09 11:49:09.858473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.458 [2024-12-09 11:49:09.900064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.458 [2024-12-09 11:49:09.900102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.458 [2024-12-09 11:49:09.900108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.458 [2024-12-09 11:49:09.900114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.458 [2024-12-09 11:49:09.900119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.458 [2024-12-09 11:49:09.901528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.458 [2024-12-09 11:49:09.901636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.458 [2024-12-09 11:49:09.901741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.458 [2024-12-09 11:49:09.901742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 [2024-12-09 11:49:10.070772] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaeb940/0xaefe30) succeed. 00:10:02.458 [2024-12-09 11:49:10.082142] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaecfd0/0xb314d0) succeed. 
00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 [2024-12-09 11:49:10.224451] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
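The setup just traced is a handful of RPCs: create the RDMA transport, put the discovery service on 192.168.100.8:8009, and register three referrals on port 4430, after which nvmf_discovery_get_referrals must report length 3. Replayed directly against rpc.py, arguments copied from the log:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
  [ "$($rpc nvmf_discovery_get_referrals | jq length)" -eq 3 ]   # three referrals registered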
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:10:02.458 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
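get_referral_ips checks the same fact from both sides: the target's own referral table over RPC, and what a host actually sees in the discovery log page via nvme discover. Both sorted lists must match. Condensed, with hostnqn/hostid taken from the variables the harness exports:

  rpc_ips=$(scripts/rpc.py nvmf_discovery_get_referrals \
            | jq -r '.[].address.traddr' | sort | xargs)
  nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                 -t rdma -a 192.168.100.8 -s 8009 -o json \
             | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
             | sort | xargs)
  [ "$rpc_ips" == "$nvme_ips" ]   # both report 127.0.0.2 127.0.0.3 127.0.0.4 in the @49/@50 checks above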
rpc_cmd nvmf_discovery_get_referrals 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.459 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.718 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:02.719 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:02.978 11:49:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
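The second round above re-adds 127.0.0.2 twice, once as a plain discovery referral (-n discovery) and once bound to a subsystem NQN, which is why get_referral_ips prints "127.0.0.2 127.0.0.2". The -n flag is what distinguishes the two entries:

  scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
  scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # 127.0.0.2, twice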
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:03.238 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
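The get_referral_ips helper traced immediately above and below has two modes: with the rpc argument it asks the target for its configured referrals over JSON-RPC, and with the nvme argument it runs a live discovery against 192.168.100.8:8009 and extracts the referral records a host would actually see; the echo just below collapses the sorted addresses for comparison. Condensed to standalone commands, the add/verify/remove cycle this suite exercises looks roughly like the following sketch (assuming SPDK's scripts/rpc.py, wrapped here by rpc_cmd, is invocable as rpc.py; transports, addresses, ports, and NQNs exactly as in the trace):

    # Register two referrals on the discovery subsystem: one pointing at another
    # discovery service, one at a specific subsystem (both at 127.0.0.2:4430 here).
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # Target-side view: referral addresses as configured.
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # Host-side view: referral records returned by the discovery service itself.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Drop a referral again; both views should shrink in step.
    rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The [[ ... == ... ]] comparisons repeated through this block are exactly that agreement check after every add and remove.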
00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:03.497 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.498 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:03.498 rmmod nvme_rdma 00:10:03.757 rmmod nvme_fabrics 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3149891 ']' 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3149891 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3149891 ']' 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3149891 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3149891 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3149891' 00:10:03.757 killing process with pid 3149891 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3149891 00:10:03.757 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3149891 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:04.017 00:10:04.017 real 0m8.242s 00:10:04.017 user 0m10.384s 00:10:04.017 sys 0m5.065s 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.017 11:49:11 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 ************************************ 00:10:04.017 END TEST nvmf_referrals 00:10:04.017 ************************************ 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 ************************************ 00:10:04.017 START TEST nvmf_connect_disconnect 00:10:04.017 ************************************ 00:10:04.017 11:49:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:04.017 * Looking for test storage... 00:10:04.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.017 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.017 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.017 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.278 --rc genhtml_branch_coverage=1 00:10:04.278 --rc genhtml_function_coverage=1 00:10:04.278 --rc genhtml_legend=1 00:10:04.278 --rc geninfo_all_blocks=1 00:10:04.278 --rc geninfo_unexecuted_blocks=1 00:10:04.278 00:10:04.278 ' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.278 --rc genhtml_branch_coverage=1 00:10:04.278 --rc genhtml_function_coverage=1 00:10:04.278 --rc genhtml_legend=1 00:10:04.278 --rc geninfo_all_blocks=1 00:10:04.278 --rc geninfo_unexecuted_blocks=1 00:10:04.278 00:10:04.278 ' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.278 --rc genhtml_branch_coverage=1 00:10:04.278 --rc genhtml_function_coverage=1 00:10:04.278 --rc genhtml_legend=1 00:10:04.278 --rc geninfo_all_blocks=1 00:10:04.278 --rc geninfo_unexecuted_blocks=1 00:10:04.278 00:10:04.278 ' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.278 --rc genhtml_branch_coverage=1 00:10:04.278 --rc genhtml_function_coverage=1 00:10:04.278 --rc genhtml_legend=1 00:10:04.278 --rc geninfo_all_blocks=1 00:10:04.278 --rc geninfo_unexecuted_blocks=1 00:10:04.278 00:10:04.278 ' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.278 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.279 11:49:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.279 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.279 11:49:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.852 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.852 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.852 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.852 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.852 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 
00:10:10.853 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:10.853 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:10.853 Found net devices under 0000:da:00.0: mlx_0_0 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
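This block is nvmf/common.sh matching the host's PCI inventory against its table of Intel (e810/x722) and Mellanox device IDs and then, in the per-device loop that continues below, resolving each matched function to its kernel net interface via sysfs; the two hits above are the mlx5 ports (0x15b3:0x1015) at 0000:da:00.0 and 0000:da:00.1. The mapping step reduces to roughly this sketch (device addresses as found on this host):

    # Each PCI network function exposes its interface name under its device node.
    for pci in 0000:da:00.0 0000:da:00.1; do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
    done
    # On this rig the two ports resolve to mlx_0_0 and mlx_0_1, as echoed below.

Because the transport is rdma, the script also sets NVME_CONNECT to 'nvme connect -i 15', asking nvme-cli for 15 I/O queues on every later connect.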
00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:10.853 Found net devices under 0000:da:00.1: mlx_0_1 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.853 11:49:17 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:10.853 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:10.854 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:10.854 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:10.854 altname enp218s0f0np0 00:10:10.854 altname ens818f0np0 00:10:10.854 inet 192.168.100.8/24 scope global mlx_0_0 00:10:10.854 valid_lft forever preferred_lft forever 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:10.854 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:10.854 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:10.854 altname enp218s0f1np1 00:10:10.854 altname ens818f1np1 00:10:10.854 inet 192.168.100.9/24 scope global mlx_0_1 00:10:10.854 valid_lft forever preferred_lft forever 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:10.854 11:49:17 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:10.854 192.168.100.9' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:10.854 192.168.100.9' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:10.854 192.168.100.9' 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:10:10.854 11:49:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:10.854 11:49:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3153514 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3153514 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3153514 ']' 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.854 [2024-12-09 11:49:18.089577] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:10:10.854 [2024-12-09 11:49:18.089637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.854 [2024-12-09 11:49:18.166357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.854 [2024-12-09 11:49:18.207503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.854 [2024-12-09 11:49:18.207541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.854 [2024-12-09 11:49:18.207548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.854 [2024-12-09 11:49:18.207553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.854 [2024-12-09 11:49:18.207559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
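nvmfappstart, traced above, boils down to launching the target and polling its JSON-RPC socket until it answers; the reactor notices just below mark the point where that poll starts succeeding. A reduced sketch of the pattern (paths and flags as in this job; this approximates the nvmfappstart/waitforlisten helpers, it is not their exact bodies):

    # -m 0xF pins four cores (the four reactors below), -e 0xFFFF enables all
    # tracepoint groups, -i 0 picks shared-memory id 0 (hence nvmf_trace.0 above).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # 3153514 in this run
    # waitforlisten: block until the RPC socket accepts a request.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done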
00:10:10.854 [2024-12-09 11:49:18.209000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.854 [2024-12-09 11:49:18.209107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.854 [2024-12-09 11:49:18.209193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.854 [2024-12-09 11:49:18.209194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.854 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 [2024-12-09 11:49:18.355742] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:10.855 [2024-12-09 11:49:18.378394] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x138f940/0x1393e30) succeed. 00:10:10.855 [2024-12-09 11:49:18.389845] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1390fd0/0x13d54d0) succeed. 
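With the RDMA transport created (the rdma.c notices above; the [[ 0 == 0 ]] just below is rpc_cmd checking the call's exit status), the suite publishes a RAM-backed namespace over that transport. The transport call just traced plus the rpc_cmd provisioning that follows are equivalent to this standalone sketch (flags exactly as issued in the trace; the flag glosses are per SPDK's rpc.py help, not from this log):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    #   -u 8192 is the I/O unit size; -c 0 asks for zero in-capsule data, which the
    #   target raises to the 256-byte minimum (the rdma.c WARNING above).
    rpc.py bdev_malloc_create 64 512      # 64 MB malloc bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Each of the five iterations that follow connects a host controller to that subsystem at 192.168.100.8:4420 and tears it down again, which is what prints the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines.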
00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 [2024-12-09 11:49:18.548788] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:10.855 11:49:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:15.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:30.908 11:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:30.908 rmmod nvme_rdma 00:10:30.908 rmmod nvme_fabrics 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3153514 ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3153514 ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153514' 00:10:30.908 killing process with pid 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3153514 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:30.908 00:10:30.908 real 0m26.732s 00:10:30.908 user 1m23.038s 00:10:30.908 sys 0m5.408s 00:10:30.908 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.909 
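The five "disconnected 1 controller(s)" lines above are the test body: with num_iterations=5, each pass connects a host to cnode1 over RDMA and tears the controller back down, after which nvmftestfini unloads nvme-rdma/nvme-fabrics and kills the target by pid. The loop itself is essentially nvme-cli; a sketch with the address and NQN from the trace (-i 15 as in the NVME_CONNECT override the harness applies for these mlx5 NICs):

    for _ in $(seq 1 5); do          # num_iterations=5 in the trace
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1
        # the harness polls until the controller is visible before moving on
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        # -> "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)"
    done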
************************************ 00:10:30.909 END TEST nvmf_connect_disconnect 00:10:30.909 ************************************ 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.909 ************************************ 00:10:30.909 START TEST nvmf_multitarget 00:10:30.909 ************************************ 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:30.909 * Looking for test storage... 00:10:30.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.909 --rc genhtml_branch_coverage=1 00:10:30.909 --rc genhtml_function_coverage=1 00:10:30.909 --rc genhtml_legend=1 00:10:30.909 --rc geninfo_all_blocks=1 00:10:30.909 --rc geninfo_unexecuted_blocks=1 00:10:30.909 00:10:30.909 ' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.909 --rc genhtml_branch_coverage=1 00:10:30.909 --rc genhtml_function_coverage=1 00:10:30.909 --rc genhtml_legend=1 00:10:30.909 --rc geninfo_all_blocks=1 00:10:30.909 --rc geninfo_unexecuted_blocks=1 00:10:30.909 00:10:30.909 ' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.909 --rc genhtml_branch_coverage=1 00:10:30.909 --rc genhtml_function_coverage=1 00:10:30.909 --rc genhtml_legend=1 00:10:30.909 --rc geninfo_all_blocks=1 00:10:30.909 --rc geninfo_unexecuted_blocks=1 00:10:30.909 00:10:30.909 ' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.909 --rc genhtml_branch_coverage=1 00:10:30.909 --rc genhtml_function_coverage=1 00:10:30.909 --rc genhtml_legend=1 00:10:30.909 --rc geninfo_all_blocks=1 00:10:30.909 --rc geninfo_unexecuted_blocks=1 00:10:30.909 00:10:30.909 ' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.909 11:49:38 
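The scripts/common.sh stepping above is the harness's dotted-version comparison, here asking whether the installed lcov predates 2.x before choosing coverage flags. Paraphrased from the traced steps, not the verbatim source, and assuming numeric components (which the traced decimal() check enforces):

    # Does dotted version $1 sort strictly before version $2?
    version_lt() {
        local -a a b
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"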
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.909 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.910 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:30.910 11:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.910 11:49:38 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.480 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:37.481 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:37.481 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:37.481 Found net devices under 0000:da:00.0: mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:37.481 Found net devices under 0000:da:00.1: mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:37.481 11:49:44 
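gather_supported_nvmf_pci_devs, traced above, matches known Intel/Mellanox device IDs against the PCI bus (here two ConnectX 0x1015 functions) and resolves each matched function to its kernel net device through sysfs. The resolution step is just a glob; a sketch with the bus address from this run:

    # Map a PCI function to its net device(s): 0000:da:00.0 -> mlx_0_0 here
    pci=0000:da:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "${dev##*/}"
    done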
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:37.481 11:49:44 
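Between the device scan and IP assignment, rdma_device_init loads the kernel RDMA/IB stack and then walks the discovered net devices to build the RDMA interface list. The module sequence, exactly as modprobed in the trace (these must be available in the running kernel for the tests to proceed):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done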
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:37.481 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.481 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:37.481 altname enp218s0f0np0 00:10:37.481 altname ens818f0np0 00:10:37.481 inet 192.168.100.8/24 scope global mlx_0_0 00:10:37.481 valid_lft forever preferred_lft forever 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:37.481 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.481 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:37.481 altname enp218s0f1np1 00:10:37.481 altname ens818f1np1 00:10:37.481 inet 192.168.100.9/24 scope global mlx_0_1 00:10:37.481 valid_lft forever preferred_lft forever 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:37.481 11:49:44 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:37.481 192.168.100.9' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:37.481 192.168.100.9' 00:10:37.481 11:49:44 
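Each interface's address is harvested with the ip/awk/cut pipeline traced above; the entries that follow take the first address of the resulting list as NVMF_FIRST_TARGET_IP (192.168.100.8) and the second as NVMF_SECOND_TARGET_IP (192.168.100.9). The helper reduces to:

    # First IPv4 address configured on an interface, as in the traced get_ip_address
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1    # -> 192.168.100.9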
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:37.481 192.168.100.9' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3160149 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3160149 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3160149 ']' 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.481 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.482 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.482 11:49:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:37.482 [2024-12-09 11:49:44.920628] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
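The multitarget test then starts its own target process: nvmfappstart launches build/bin/nvmf_tgt with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF), and a four-core mask (-m 0xF), records nvmfpid, and blocks until the app listens on /var/tmp/spdk.sock. Outside the harness the same wait can be approximated by polling any cheap RPC (rpc_get_methods here stands in for the harness's waitforlisten helper):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the RPC socket answers before issuing real commands
    until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done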
00:10:37.482 [2024-12-09 11:49:44.920677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.482 [2024-12-09 11:49:44.997616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.482 [2024-12-09 11:49:45.039671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.482 [2024-12-09 11:49:45.039707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.482 [2024-12-09 11:49:45.039714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.482 [2024-12-09 11:49:45.039721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.482 [2024-12-09 11:49:45.039726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.482 [2024-12-09 11:49:45.041239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.482 [2024-12-09 11:49:45.041350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.482 [2024-12-09 11:49:45.041457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.482 [2024-12-09 11:49:45.041457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:37.482 "nvmf_tgt_1" 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:37.482 "nvmf_tgt_2" 00:10:37.482 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:37.482 
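With the target up, the test exercises SPDK's support for hosting multiple independent targets in one process: it baselines the target count at one (the default target), creates nvmf_tgt_1 and nvmf_tgt_2, then, in the entries that follow, confirms the count reached three and deletes both again. The shape of the whole check, using the same helper script and jq length predicates as the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # default target only
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]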
11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:37.739 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:37.739 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:37.739 true 00:10:37.739 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:37.997 true 00:10:37.997 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.998 11:49:45 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:37.998 rmmod nvme_rdma 00:10:37.998 rmmod nvme_fabrics 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3160149 ']' 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3160149 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3160149 ']' 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3160149 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.998 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160149 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160149' 00:10:38.257 killing process with pid 3160149 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3160149 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3160149 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:38.257 00:10:38.257 real 0m7.491s 00:10:38.257 user 0m7.336s 00:10:38.257 sys 0m4.845s 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 ************************************ 00:10:38.257 END TEST nvmf_multitarget 00:10:38.257 ************************************ 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 ************************************ 00:10:38.257 START TEST nvmf_rpc 00:10:38.257 ************************************ 00:10:38.257 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:38.516 * Looking for test storage... 
00:10:38.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.516 --rc genhtml_branch_coverage=1 00:10:38.516 --rc genhtml_function_coverage=1 00:10:38.516 --rc genhtml_legend=1 00:10:38.516 --rc geninfo_all_blocks=1 00:10:38.516 --rc geninfo_unexecuted_blocks=1 00:10:38.516 00:10:38.516 ' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.516 --rc genhtml_branch_coverage=1 00:10:38.516 --rc genhtml_function_coverage=1 00:10:38.516 --rc genhtml_legend=1 00:10:38.516 --rc geninfo_all_blocks=1 00:10:38.516 --rc geninfo_unexecuted_blocks=1 00:10:38.516 00:10:38.516 ' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.516 --rc genhtml_branch_coverage=1 00:10:38.516 --rc genhtml_function_coverage=1 00:10:38.516 --rc genhtml_legend=1 00:10:38.516 --rc geninfo_all_blocks=1 00:10:38.516 --rc geninfo_unexecuted_blocks=1 00:10:38.516 00:10:38.516 ' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.516 --rc genhtml_branch_coverage=1 00:10:38.516 --rc genhtml_function_coverage=1 00:10:38.516 --rc genhtml_legend=1 00:10:38.516 --rc geninfo_all_blocks=1 00:10:38.516 --rc geninfo_unexecuted_blocks=1 00:10:38.516 00:10:38.516 ' 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.516 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:38.517 11:49:46 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.517 11:49:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.089 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.090 11:49:52 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:45.090 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:45.090 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:45.090 Found net devices under 0000:da:00.0: mlx_0_0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:45.090 Found net devices under 0000:da:00.1: mlx_0_1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:45.090 11:49:52 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:45.090 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.090 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:45.090 altname enp218s0f0np0 00:10:45.090 altname ens818f0np0 00:10:45.090 inet 192.168.100.8/24 scope global mlx_0_0 00:10:45.090 valid_lft forever preferred_lft forever 00:10:45.090 
11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:45.090 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.090 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:45.090 altname enp218s0f1np1 00:10:45.090 altname ens818f1np1 00:10:45.090 inet 192.168.100.9/24 scope global mlx_0_1 00:10:45.090 valid_lft forever preferred_lft forever 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:45.090 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:45.091 192.168.100.9' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:45.091 192.168.100.9' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:45.091 192.168.100.9' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
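The get_ip_address trace above reduces to one pipeline: "ip -o -4 addr show IFACE" prints each address as a single record, awk takes the fourth field (address/prefix), and cut drops the prefix length. A minimal standalone sketch of that helper, assuming only iproute2 is installed; the function name mirrors the traced helper, but this is an illustration rather than the spdk/test/nvmf/common.sh source:

    # Sketch of the traced address-extraction pipeline (assumes iproute2).
    # "ip -o -4" emits one record per address: "N: IFACE inet A.B.C.D/P ..."
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9 in this run

With those two values the harness builds RDMA_IP_LIST, then takes NVMF_FIRST_TARGET_IP from "head -n 1" and NVMF_SECOND_TARGET_IP from "tail -n +2 | head -n 1", exactly as the trace shows.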
00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3163501 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3163501 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3163501 ']' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 [2024-12-09 11:49:52.418864] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:10:45.091 [2024-12-09 11:49:52.418914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.091 [2024-12-09 11:49:52.493876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.091 [2024-12-09 11:49:52.536169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.091 [2024-12-09 11:49:52.536205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.091 [2024-12-09 11:49:52.536212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.091 [2024-12-09 11:49:52.536219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.091 [2024-12-09 11:49:52.536226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
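nvmfappstart in the trace above amounts to: start nvmf_tgt with the requested reactor mask, remember nvmfpid, and block in waitforlisten until the application answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait pattern; the rpc.py invocation and the spdk_get_version method are assumptions based on stock SPDK, not read out of this log:

    # Launch the target as traced (-i 0 -e 0xFFFF -m 0xF), then poll the
    # RPC socket so later rpc_cmd calls (nvmf_create_transport, ...) run
    # only after the app is listening.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done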
00:10:45.091 [2024-12-09 11:49:52.537790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.091 [2024-12-09 11:49:52.537903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.091 [2024-12-09 11:49:52.537937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.091 [2024-12-09 11:49:52.537940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:45.091 "tick_rate": 2100000000, 00:10:45.091 "poll_groups": [ 00:10:45.091 { 00:10:45.091 "name": "nvmf_tgt_poll_group_000", 00:10:45.091 "admin_qpairs": 0, 00:10:45.091 "io_qpairs": 0, 00:10:45.091 "current_admin_qpairs": 0, 00:10:45.091 "current_io_qpairs": 0, 00:10:45.091 "pending_bdev_io": 0, 00:10:45.091 "completed_nvme_io": 0, 00:10:45.091 "transports": [] 00:10:45.091 }, 00:10:45.091 { 00:10:45.091 "name": "nvmf_tgt_poll_group_001", 00:10:45.091 "admin_qpairs": 0, 00:10:45.091 "io_qpairs": 0, 00:10:45.091 "current_admin_qpairs": 0, 00:10:45.091 "current_io_qpairs": 0, 00:10:45.091 "pending_bdev_io": 0, 00:10:45.091 "completed_nvme_io": 0, 00:10:45.091 "transports": [] 00:10:45.091 }, 00:10:45.091 { 00:10:45.091 "name": "nvmf_tgt_poll_group_002", 00:10:45.091 "admin_qpairs": 0, 00:10:45.091 "io_qpairs": 0, 00:10:45.091 "current_admin_qpairs": 0, 00:10:45.091 "current_io_qpairs": 0, 00:10:45.091 "pending_bdev_io": 0, 00:10:45.091 "completed_nvme_io": 0, 00:10:45.091 "transports": [] 00:10:45.091 }, 00:10:45.091 { 00:10:45.091 "name": "nvmf_tgt_poll_group_003", 00:10:45.091 "admin_qpairs": 0, 00:10:45.091 "io_qpairs": 0, 00:10:45.091 "current_admin_qpairs": 0, 00:10:45.091 "current_io_qpairs": 0, 00:10:45.091 "pending_bdev_io": 0, 00:10:45.091 "completed_nvme_io": 0, 00:10:45.091 "transports": [] 00:10:45.091 } 00:10:45.091 ] 00:10:45.091 }' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 [2024-12-09 11:49:52.815916] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdea9a0/0xdeee90) succeed. 00:10:45.091 [2024-12-09 11:49:52.827371] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdec030/0xe30530) succeed. 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:45.091 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.092 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.092 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.092 11:49:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:45.092 "tick_rate": 2100000000, 00:10:45.092 "poll_groups": [ 00:10:45.092 { 00:10:45.092 "name": "nvmf_tgt_poll_group_000", 00:10:45.092 "admin_qpairs": 0, 00:10:45.092 "io_qpairs": 0, 00:10:45.092 "current_admin_qpairs": 0, 00:10:45.092 "current_io_qpairs": 0, 00:10:45.092 "pending_bdev_io": 0, 00:10:45.092 "completed_nvme_io": 0, 00:10:45.092 "transports": [ 00:10:45.092 { 00:10:45.092 "trtype": "RDMA", 00:10:45.092 "pending_data_buffer": 0, 00:10:45.092 "devices": [ 00:10:45.092 { 00:10:45.092 "name": "mlx5_0", 00:10:45.092 "polls": 15241, 00:10:45.092 "idle_polls": 15241, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "mlx5_1", 00:10:45.092 "polls": 15241, 00:10:45.092 "idle_polls": 15241, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "nvmf_tgt_poll_group_001", 00:10:45.092 "admin_qpairs": 0, 00:10:45.092 "io_qpairs": 0, 00:10:45.092 "current_admin_qpairs": 0, 00:10:45.092 "current_io_qpairs": 0, 00:10:45.092 "pending_bdev_io": 0, 00:10:45.092 "completed_nvme_io": 0, 00:10:45.092 "transports": [ 00:10:45.092 { 00:10:45.092 "trtype": "RDMA", 00:10:45.092 "pending_data_buffer": 0, 00:10:45.092 "devices": [ 00:10:45.092 { 00:10:45.092 "name": "mlx5_0", 
00:10:45.092 "polls": 9857, 00:10:45.092 "idle_polls": 9857, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "mlx5_1", 00:10:45.092 "polls": 9857, 00:10:45.092 "idle_polls": 9857, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "nvmf_tgt_poll_group_002", 00:10:45.092 "admin_qpairs": 0, 00:10:45.092 "io_qpairs": 0, 00:10:45.092 "current_admin_qpairs": 0, 00:10:45.092 "current_io_qpairs": 0, 00:10:45.092 "pending_bdev_io": 0, 00:10:45.092 "completed_nvme_io": 0, 00:10:45.092 "transports": [ 00:10:45.092 { 00:10:45.092 "trtype": "RDMA", 00:10:45.092 "pending_data_buffer": 0, 00:10:45.092 "devices": [ 00:10:45.092 { 00:10:45.092 "name": "mlx5_0", 00:10:45.092 "polls": 5268, 00:10:45.092 "idle_polls": 5268, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "mlx5_1", 00:10:45.092 "polls": 5268, 00:10:45.092 "idle_polls": 5268, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "nvmf_tgt_poll_group_003", 00:10:45.092 "admin_qpairs": 0, 00:10:45.092 "io_qpairs": 0, 00:10:45.092 "current_admin_qpairs": 0, 00:10:45.092 "current_io_qpairs": 0, 00:10:45.092 "pending_bdev_io": 0, 00:10:45.092 "completed_nvme_io": 0, 00:10:45.092 "transports": [ 00:10:45.092 { 00:10:45.092 "trtype": "RDMA", 00:10:45.092 "pending_data_buffer": 0, 00:10:45.092 "devices": [ 00:10:45.092 { 00:10:45.092 "name": "mlx5_0", 00:10:45.092 "polls": 874, 00:10:45.092 "idle_polls": 874, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "mlx5_1", 
00:10:45.092 "polls": 874, 00:10:45.092 "idle_polls": 874, 00:10:45.092 "completions": 0, 00:10:45.092 "requests": 0, 00:10:45.092 "request_latency": 0, 00:10:45.092 "pending_free_request": 0, 00:10:45.092 "pending_rdma_read": 0, 00:10:45.092 "pending_rdma_write": 0, 00:10:45.092 "pending_rdma_send": 0, 00:10:45.092 "total_send_wrs": 0, 00:10:45.092 "send_doorbell_updates": 0, 00:10:45.092 "total_recv_wrs": 4096, 00:10:45.092 "recv_doorbell_updates": 1 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:45.092 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:45.353 11:49:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 Malloc1 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 [2024-12-09 11:49:53.278842] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:45.353 11:49:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:45.353 [2024-12-09 11:49:53.325014] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:10:45.353 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:45.353 could not add new controller: failed to write to nvme-fabrics device 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 11:49:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:46.732 11:49:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.732 11:49:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.732 11:49:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.732 11:49:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:46.732 11:49:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.636 11:49:56 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:48.636 11:49:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:49.572 [2024-12-09 11:49:57.426671] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:10:49.572 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:49.572 could not add new controller: failed to write to nvme-fabrics device 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.572 11:49:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:50.509 11:49:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.509 11:49:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.509 11:49:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.509 11:49:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:50.509 11:49:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.413 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.413 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.413 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.672 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.672 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.672 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:52.672 11:50:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 [2024-12-09 11:50:01.501238] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.609 11:50:01 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.609 11:50:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:54.546 11:50:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.546 11:50:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:54.546 11:50:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.546 11:50:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:54.546 11:50:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.079 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:57.080 11:50:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 [2024-12-09 11:50:05.569338] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.647 11:50:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:58.584 11:50:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.584 11:50:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:58.584 11:50:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.584 11:50:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:58.584 11:50:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.118 
11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:01.118 11:50:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.686 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 [2024-12-09 11:50:09.624921] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.687 11:50:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:02.623 11:50:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.623 11:50:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.623 11:50:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.623 11:50:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.623 11:50:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:05.158 11:50:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.725 11:50:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.725 [2024-12-09 11:50:13.677985] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.725 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.726 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.726 11:50:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:06.661 11:50:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.661 11:50:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.661 11:50:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.661 11:50:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.661 11:50:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:09.196 11:50:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.763 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.763 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.763 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.763 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.764 11:50:17 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 [2024-12-09 11:50:17.744569] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.764 11:50:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:10.700 11:50:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.700 11:50:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.700 11:50:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.700 11:50:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.700 11:50:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:13.231 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:13.232 11:50:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 [2024-12-09 11:50:21.772157] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.799 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.800 [2024-12-09 11:50:21.820806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.800 11:50:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.800 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 [2024-12-09 11:50:21.869043] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 [2024-12-09 11:50:21.917169] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
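The entries above and below trace target/rpc.sh cycling the subsystem lifecycle RPCs. The first pass (rpc.sh@81-94) creates nqn.2016-06.io.spdk:cnode1, attaches Malloc1 with five namespaces, connects the host with nvme connect, and polls for the SPDKISFASTANDAWESOME serial before disconnecting and deleting; the pass traced here (rpc.sh@99-107) repeats the create/listen/add-ns/remove-ns/delete churn without a host connection. A minimal sketch of that second loop, reconstructed from the xtrace (assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py; loops=5 per the seq 1 5 entry):

    loops=5
    for i in $(seq 1 $loops); do                  # rpc.sh@99
        # rpc.sh@100: the subsystem carries the serial the host-side checks grep for
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        # rpc.sh@101: RDMA listener on the mlx5 port used throughout this run
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        # rpc.sh@102/@103: back it with the Malloc1 bdev and open it to any host
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # rpc.sh@105/@107: unwind immediately so repeated create/delete is exercised
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

In the connected variant, the waitforserial helper traced at common/autotest_common.sh@1202-1212 does the host-side check: it sleeps 2 seconds, then for up to 16 iterations compares $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) against the expected device count before returning.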
00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 [2024-12-09 11:50:21.965286] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.059 11:50:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.059 11:50:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:14.059 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:14.059 "tick_rate": 2100000000,
00:11:14.059 "poll_groups": [
00:11:14.059 {
00:11:14.059 "name": "nvmf_tgt_poll_group_000",
00:11:14.059 "admin_qpairs": 2,
00:11:14.059 "io_qpairs": 27,
00:11:14.059 "current_admin_qpairs": 0,
00:11:14.059 "current_io_qpairs": 0,
00:11:14.059 "pending_bdev_io": 0,
00:11:14.059 "completed_nvme_io": 77,
00:11:14.059 "transports": [
00:11:14.059 {
00:11:14.059 "trtype": "RDMA",
00:11:14.059 "pending_data_buffer": 0,
00:11:14.059 "devices": [
00:11:14.059 {
00:11:14.059 "name": "mlx5_0",
00:11:14.059 "polls": 3506418,
00:11:14.059 "idle_polls": 3506176,
00:11:14.059 "completions": 263,
00:11:14.059 "requests": 131,
00:11:14.059 "request_latency": 19715010,
00:11:14.059 "pending_free_request": 0,
00:11:14.059 "pending_rdma_read": 0,
00:11:14.059 "pending_rdma_write": 0,
00:11:14.059 "pending_rdma_send": 0,
00:11:14.059 "total_send_wrs": 207,
00:11:14.059 "send_doorbell_updates": 121,
00:11:14.059 "total_recv_wrs": 4227,
00:11:14.059 "recv_doorbell_updates": 121
00:11:14.059 },
00:11:14.059 {
00:11:14.059 "name": "mlx5_1",
00:11:14.060 "polls": 3506418,
00:11:14.060 "idle_polls": 3506418,
00:11:14.060 "completions": 0,
00:11:14.060 "requests": 0,
00:11:14.060 "request_latency": 0,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 0,
00:11:14.060 "send_doorbell_updates": 0,
00:11:14.060 "total_recv_wrs": 4096,
00:11:14.060 "recv_doorbell_updates": 1
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "nvmf_tgt_poll_group_001",
00:11:14.060 "admin_qpairs": 2,
00:11:14.060 "io_qpairs": 26,
00:11:14.060 "current_admin_qpairs": 0,
00:11:14.060 "current_io_qpairs": 0,
00:11:14.060 "pending_bdev_io": 0,
00:11:14.060 "completed_nvme_io": 126,
00:11:14.060 "transports": [
00:11:14.060 {
00:11:14.060 "trtype": "RDMA",
00:11:14.060 "pending_data_buffer": 0,
00:11:14.060 "devices": [
00:11:14.060 {
00:11:14.060 "name": "mlx5_0",
00:11:14.060 "polls": 3475062,
00:11:14.060 "idle_polls": 3474740,
00:11:14.060 "completions": 360,
00:11:14.060 "requests": 180,
00:11:14.060 "request_latency": 32060238,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 306,
00:11:14.060 "send_doorbell_updates": 157,
00:11:14.060 "total_recv_wrs": 4276,
00:11:14.060 "recv_doorbell_updates": 158
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "mlx5_1",
00:11:14.060 "polls": 3475062,
00:11:14.060 "idle_polls": 3475062,
00:11:14.060 "completions": 0,
00:11:14.060 "requests": 0,
00:11:14.060 "request_latency": 0,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 0,
00:11:14.060 "send_doorbell_updates": 0,
00:11:14.060 "total_recv_wrs": 4096,
00:11:14.060 "recv_doorbell_updates": 1
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "nvmf_tgt_poll_group_002",
00:11:14.060 "admin_qpairs": 1,
00:11:14.060 "io_qpairs": 26,
00:11:14.060 "current_admin_qpairs": 0,
00:11:14.060 "current_io_qpairs": 0,
00:11:14.060 "pending_bdev_io": 0,
00:11:14.060 "completed_nvme_io": 78,
00:11:14.060 "transports": [
00:11:14.060 {
00:11:14.060 "trtype": "RDMA",
00:11:14.060 "pending_data_buffer": 0,
00:11:14.060 "devices": [
00:11:14.060 {
00:11:14.060 "name": "mlx5_0",
00:11:14.060 "polls": 3418702,
00:11:14.060 "idle_polls": 3418511,
00:11:14.060 "completions": 213,
00:11:14.060 "requests": 106,
00:11:14.060 "request_latency": 18307426,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 172,
00:11:14.060 "send_doorbell_updates": 94,
00:11:14.060 "total_recv_wrs": 4202,
00:11:14.060 "recv_doorbell_updates": 94
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "mlx5_1",
00:11:14.060 "polls": 3418702,
00:11:14.060 "idle_polls": 3418702,
00:11:14.060 "completions": 0,
00:11:14.060 "requests": 0,
00:11:14.060 "request_latency": 0,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 0,
00:11:14.060 "send_doorbell_updates": 0,
00:11:14.060 "total_recv_wrs": 4096,
00:11:14.060 "recv_doorbell_updates": 1
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "nvmf_tgt_poll_group_003",
00:11:14.060 "admin_qpairs": 2,
00:11:14.060 "io_qpairs": 26,
00:11:14.060 "current_admin_qpairs": 0,
00:11:14.060 "current_io_qpairs": 0,
00:11:14.060 "pending_bdev_io": 0,
00:11:14.060 "completed_nvme_io": 174,
00:11:14.060 "transports": [
00:11:14.060 {
00:11:14.060 "trtype": "RDMA",
00:11:14.060 "pending_data_buffer": 0,
00:11:14.060 "devices": [
00:11:14.060 {
00:11:14.060 "name": "mlx5_0",
00:11:14.060 "polls": 2740636,
00:11:14.060 "idle_polls": 2740246,
00:11:14.060 "completions": 456,
00:11:14.060 "requests": 228,
00:11:14.060 "request_latency": 46892056,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 402,
00:11:14.060 "send_doorbell_updates": 190,
00:11:14.060 "total_recv_wrs": 4324,
00:11:14.060 "recv_doorbell_updates": 191
00:11:14.060 },
00:11:14.060 {
00:11:14.060 "name": "mlx5_1",
00:11:14.060 "polls": 2740636,
00:11:14.060 "idle_polls": 2740636,
00:11:14.060 "completions": 0,
00:11:14.060 "requests": 0,
00:11:14.060 "request_latency": 0,
00:11:14.060 "pending_free_request": 0,
00:11:14.060 "pending_rdma_read": 0,
00:11:14.060 "pending_rdma_write": 0,
00:11:14.060 "pending_rdma_send": 0,
00:11:14.060 "total_send_wrs": 0,
00:11:14.060 "send_doorbell_updates": 0,
00:11:14.060 "total_recv_wrs": 4096,
00:11:14.060 "recv_doorbell_updates": 1
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }
00:11:14.060 ]
00:11:14.060 }'
00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum
'.poll_groups[].admin_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:14.060 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 )) 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 116974730 > 0 )) 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:14.319 rmmod nvme_rdma 00:11:14.319 rmmod nvme_fabrics 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.319 
11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3163501 ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3163501 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3163501 ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3163501 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3163501 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3163501' 00:11:14.319 killing process with pid 3163501 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3163501 00:11:14.319 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3163501 00:11:14.578 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.579 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:14.579 00:11:14.579 real 0m36.325s 00:11:14.579 user 2m1.853s 00:11:14.579 sys 0m5.874s 00:11:14.579 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.579 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.579 ************************************ 00:11:14.579 END TEST nvmf_rpc 00:11:14.579 ************************************ 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.838 ************************************ 00:11:14.838 START TEST nvmf_invalid 00:11:14.838 ************************************ 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:14.838 * Looking for test storage... 
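Teardown for the nvmf_rpc test ends just above: nvmftestfini unloads nvme-rdma and nvme-fabrics, then reaps the target through killprocess (pid 3163501, whose comm resolves to reactor_0). A rough reconstruction of that helper from the common/autotest_common.sh@954-@978 entries (a sketch only; the real helper also handles the sudo-wrapped case that the @964 comparison checks for):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # @954: require a pid
        kill -0 "$pid"                                   # @958: fail fast if it already exited
        if [ "$(uname)" = Linux ]; then                  # @959
            # @960: resolve the process name; the SPDK reactor thread in this run
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && :                  # @964: branch not taken here
        echo "killing process with pid $pid"             # @972
        kill "$pid"                                      # @973
        wait "$pid"                                      # @978: block until the nvmf target exits
    }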
00:11:14.838 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.838 --rc genhtml_branch_coverage=1 00:11:14.838 --rc genhtml_function_coverage=1 00:11:14.838 --rc genhtml_legend=1 00:11:14.838 --rc geninfo_all_blocks=1 00:11:14.838 --rc geninfo_unexecuted_blocks=1 00:11:14.838 00:11:14.838 ' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.838 --rc genhtml_branch_coverage=1 00:11:14.838 --rc genhtml_function_coverage=1 00:11:14.838 --rc genhtml_legend=1 00:11:14.838 --rc geninfo_all_blocks=1 00:11:14.838 --rc geninfo_unexecuted_blocks=1 00:11:14.838 00:11:14.838 ' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.838 --rc genhtml_branch_coverage=1 00:11:14.838 --rc genhtml_function_coverage=1 00:11:14.838 --rc genhtml_legend=1 00:11:14.838 --rc geninfo_all_blocks=1 00:11:14.838 --rc geninfo_unexecuted_blocks=1 00:11:14.838 00:11:14.838 ' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.838 --rc genhtml_branch_coverage=1 00:11:14.838 --rc genhtml_function_coverage=1 00:11:14.838 --rc genhtml_legend=1 00:11:14.838 --rc geninfo_all_blocks=1 00:11:14.838 --rc geninfo_unexecuted_blocks=1 00:11:14.838 00:11:14.838 ' 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:14.838 
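The version gate above (lt 1.15 2) is scripts/common.sh splitting both version strings on '.', '-' and ':' and comparing the fields numerically, treating missing fields as zero. A compact re-statement, assuming purely numeric fields as in the lcov case traced here; the helper name ver_lt is illustrative, not the script's internal name:

    ver_lt() {   # true when $1 < $2 under dotted-numeric ordering
        local -a v1 v2
        local IFS=.-:
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc coverage flags"

That true result is why the run exports the lcov_branch_coverage/lcov_function_coverage options seen above.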
11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.838 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.839 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.839 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.098 11:50:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.671 11:50:28 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.671 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:21.672 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:21.672 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:21.672 Found net devices under 0000:da:00.0: mlx_0_0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:21.672 Found net devices under 0000:da:00.1: mlx_0_1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:11:21.672 11:50:28 
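Both ConnectX functions (0000:da:00.0 and .1, device 0x1015) survive the vendor/device filter, and each PCI address is then mapped to its kernel net device through the /sys/bus/pci/devices/<pci>/net/* glob the trace expands. A sketch of that lookup; pci_to_netdevs is an illustrative name, not a common.sh helper:

    pci_to_netdevs() {   # list net devices backed by one PCI function
        local pci=$1 entry
        for entry in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $entry ]] && echo "${entry##*/}"   # keep only the device name
        done
    }
    pci_to_netdevs 0000:da:00.0   # -> mlx_0_0 on this rig
    pci_to_netdevs 0000:da:00.1   # -> mlx_0_1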
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.672 11:50:28 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:21.672 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.672 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:21.672 altname enp218s0f0np0 00:11:21.672 altname ens818f0np0 00:11:21.672 inet 192.168.100.8/24 scope global mlx_0_0 00:11:21.672 valid_lft forever preferred_lft forever 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:21.672 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.672 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:21.672 altname enp218s0f1np1 00:11:21.672 altname ens818f1np1 00:11:21.672 inet 192.168.100.9/24 scope global mlx_0_1 00:11:21.672 valid_lft forever preferred_lft forever 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.672 11:50:28 
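Each interface's IPv4 address is read back with a single pipeline: field 4 of `ip -o -4 addr show` is ADDR/PREFIX, and cut drops the prefix length. The same one-liner, wrapped for reuse (get_ip is an illustrative name):

    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    get_ip mlx_0_0   # -> 192.168.100.8
    get_ip mlx_0_1   # -> 192.168.100.9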
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.672 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:21.673 192.168.100.9' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:21.673 192.168.100.9' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:21.673 11:50:28 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:21.673 192.168.100.9' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3171783 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3171783 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3171783 ']' 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.673 11:50:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:21.673 [2024-12-09 11:50:28.793657] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:11:21.673 [2024-12-09 11:50:28.793713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.673 [2024-12-09 11:50:28.871150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.673 [2024-12-09 11:50:28.914517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.673 [2024-12-09 11:50:28.914556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
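RDMA_IP_LIST is a newline-joined string, so the first and second target IPs fall out with head/tail exactly as nvmf/common.sh@485-486 show above. A self-contained sketch of that split:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9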
00:11:21.673 [2024-12-09 11:50:28.914563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.673 [2024-12-09 11:50:28.914569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.673 [2024-12-09 11:50:28.914574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.673 [2024-12-09 11:50:28.916087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.673 [2024-12-09 11:50:28.916197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.673 [2024-12-09 11:50:28.916301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.673 [2024-12-09 11:50:28.916302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8346 00:11:21.673 [2024-12-09 11:50:29.243411] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:21.673 { 00:11:21.673 "nqn": "nqn.2016-06.io.spdk:cnode8346", 00:11:21.673 "tgt_name": "foobar", 00:11:21.673 "method": "nvmf_create_subsystem", 00:11:21.673 "req_id": 1 00:11:21.673 } 00:11:21.673 Got JSON-RPC error response 00:11:21.673 response: 00:11:21.673 { 00:11:21.673 "code": -32603, 00:11:21.673 "message": "Unable to find target foobar" 00:11:21.673 }' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:21.673 { 00:11:21.673 "nqn": "nqn.2016-06.io.spdk:cnode8346", 00:11:21.673 "tgt_name": "foobar", 00:11:21.673 "method": "nvmf_create_subsystem", 00:11:21.673 "req_id": 1 00:11:21.673 } 00:11:21.673 Got JSON-RPC error response 00:11:21.673 response: 00:11:21.673 { 00:11:21.673 "code": -32603, 00:11:21.673 "message": "Unable to find target foobar" 00:11:21.673 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15219 00:11:21.673 [2024-12-09 11:50:29.436068] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15219: 
invalid serial number 'SPDKISFASTANDAWESOME' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:21.673 { 00:11:21.673 "nqn": "nqn.2016-06.io.spdk:cnode15219", 00:11:21.673 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:21.673 "method": "nvmf_create_subsystem", 00:11:21.673 "req_id": 1 00:11:21.673 } 00:11:21.673 Got JSON-RPC error response 00:11:21.673 response: 00:11:21.673 { 00:11:21.673 "code": -32602, 00:11:21.673 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:21.673 }' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:21.673 { 00:11:21.673 "nqn": "nqn.2016-06.io.spdk:cnode15219", 00:11:21.673 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:21.673 "method": "nvmf_create_subsystem", 00:11:21.673 "req_id": 1 00:11:21.673 } 00:11:21.673 Got JSON-RPC error response 00:11:21.673 response: 00:11:21.673 { 00:11:21.673 "code": -32602, 00:11:21.673 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:21.673 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12773 00:11:21.673 [2024-12-09 11:50:29.640752] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12773: invalid model number 'SPDK_Controller' 00:11:21.673 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:21.673 { 00:11:21.673 "nqn": "nqn.2016-06.io.spdk:cnode12773", 00:11:21.673 "model_number": "SPDK_Controller\u001f", 00:11:21.673 "method": "nvmf_create_subsystem", 00:11:21.673 "req_id": 1 00:11:21.673 } 00:11:21.674 Got JSON-RPC error response 00:11:21.674 response: 00:11:21.674 { 00:11:21.674 "code": -32602, 00:11:21.674 "message": "Invalid MN SPDK_Controller\u001f" 00:11:21.674 }' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:21.674 { 00:11:21.674 "nqn": "nqn.2016-06.io.spdk:cnode12773", 00:11:21.674 "model_number": "SPDK_Controller\u001f", 00:11:21.674 "method": "nvmf_create_subsystem", 00:11:21.674 "req_id": 1 00:11:21.674 } 00:11:21.674 Got JSON-RPC error response 00:11:21.674 response: 00:11:21.674 { 00:11:21.674 "code": -32602, 00:11:21.674 "message": "Invalid MN SPDK_Controller\u001f" 00:11:21.674 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # 
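The three negative cases above (unknown target name, serial number carrying a 0x1f control byte, model number carrying the same byte) all follow one pattern: call nvmf_create_subsystem with a bad field, capture the JSON-RPC error text, and glob-match the message. A sketch of that pattern with the first case's arguments; capturing stderr with 2>&1 is an assumption about where rpc.py prints the error:

    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar \
              nqn.2016-06.io.spdk:cnode8346 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo "rejected as expected (-32603)"

The error codes differ by failure class: -32603 for the unknown target, -32602 (invalid params) for the bad SN and MN.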
local chars 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.674 11:50:29 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.674 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:21.933 11:50:29 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U/"E*(nxI:'\''VD6*#>-G#F' 00:11:21.933 11:50:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U/"E*(nxI:'\''VD6*#>-G#F' nqn.2016-06.io.spdk:cnode4003 00:11:21.933 [2024-12-09 11:50:29.977893] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4003: invalid serial number 'U/"E*(nxI:'VD6*#>-G#F' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:22.192 { 00:11:22.192 "nqn": "nqn.2016-06.io.spdk:cnode4003", 00:11:22.192 "serial_number": "U/\"E*(nxI:'\''VD6*#>-G#F", 00:11:22.192 "method": "nvmf_create_subsystem", 00:11:22.192 "req_id": 1 00:11:22.192 } 00:11:22.192 Got JSON-RPC error response 00:11:22.192 response: 00:11:22.192 { 00:11:22.192 "code": -32602, 00:11:22.192 "message": "Invalid SN U/\"E*(nxI:'\''VD6*#>-G#F" 00:11:22.192 }' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:22.192 { 00:11:22.192 "nqn": "nqn.2016-06.io.spdk:cnode4003", 00:11:22.192 "serial_number": "U/\"E*(nxI:'VD6*#>-G#F", 00:11:22.192 "method": "nvmf_create_subsystem", 00:11:22.192 "req_id": 1 00:11:22.192 } 00:11:22.192 Got JSON-RPC error response 00:11:22.192 response: 00:11:22.192 { 00:11:22.192 "code": -32602, 00:11:22.192 "message": "Invalid SN U/\"E*(nxI:'VD6*#>-G#F" 00:11:22.192 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:22.192 
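The 21-character string just rejected, and the 41-character one being assembled below, both come from gen_random_s: pick a code point from the 32-127 table, render it via printf %x plus echo -e, and append. A loop-level sketch of that generator; the trace additionally shows invalid.sh@28 rejecting strings that start with '-' (so they cannot be mistaken for CLI options), which is omitted here:

    gen_random_s() {   # emit $1 random characters from the ASCII 32-127 table
        local length=$1 ll string=
        local -a chars=($(seq 32 127))   # same code-point table as invalid.sh@21
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    RANDOM=0          # invalid.sh@16 seeds RANDOM so runs are reproducible
    gen_random_s 41   # feeds the next negative case at invalid.sh@58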
11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3f' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.192 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:22.193 11:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.193 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 84 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.194 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:22.452 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.453 11:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]]
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0M@uU*?WU"b2THAp~3/6&.cW^oC(>\>|XTi<9[YY>'
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0M@uU*?WU"b2THAp~3/6&.cW^oC(>\>|XTi<9[YY>' nqn.2016-06.io.spdk:cnode21560
00:11:22.453 [2024-12-09 11:50:30.459472] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21560: invalid model number '0M@uU*?WU"b2THAp~3/6&.cW^oC(>\>|XTi<9[YY>'
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:11:22.453 {
00:11:22.453 "nqn": "nqn.2016-06.io.spdk:cnode21560",
00:11:22.453 "model_number": "0M@uU*?WU\"b2THAp~3/6&.cW^oC(>\\>|XTi<9[YY>",
00:11:22.453 "method": "nvmf_create_subsystem",
00:11:22.453 "req_id": 1
00:11:22.453 }
00:11:22.453 Got JSON-RPC error response
00:11:22.453 response:
00:11:22.453 {
00:11:22.453 "code": -32602,
00:11:22.453 "message": "Invalid MN 0M@uU*?WU\"b2THAp~3/6&.cW^oC(>\\>|XTi<9[YY>"
00:11:22.453 }'
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:11:22.453 {
00:11:22.453 "nqn": "nqn.2016-06.io.spdk:cnode21560",
00:11:22.453 "model_number": "0M@uU*?WU\"b2THAp~3/6&.cW^oC(>\\>|XTi<9[YY>",
00:11:22.453 "method": "nvmf_create_subsystem",
00:11:22.453 "req_id": 1
00:11:22.453 }
00:11:22.453 Got JSON-RPC error response
00:11:22.453 response:
00:11:22.453 {
00:11:22.453 "code": -32602,
00:11:22.453 "message": "Invalid MN 0M@uU*?WU\"b2THAp~3/6&.cW^oC(>\\>|XTi<9[YY>"
00:11:22.453 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:22.453 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma
00:11:22.712 [2024-12-09 11:50:30.690966] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xab0260/0xab4750) succeed.
00:11:22.712 [2024-12-09 11:50:30.702228] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xab18f0/0xaf5df0) succeed.
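An aside for readers following the trace: the long printf %x / echo -e / string+= loop above is the gen_random_s helper from test/nvmf/target/invalid.sh assembling a 41-character candidate model number out of random printable ASCII. A minimal sketch of the technique, reconstructed from the xtrace alone (the function body is an approximation and the leading-dash replacement is an assumption, not a verbatim copy of invalid.sh):

# Sketch of gen_random_s as stepped through in the trace above.
gen_random_s() {
    local length=$1 ll
    local chars=({32..127})   # decimal codes for the printable ASCII range
    local string
    for ((ll = 0; ll < length; ll++)); do
        # printf %x renders a random code in hex; echo -e turns \xNN into a character
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # assumed guard: a leading '-' would be parsed as an option by rpc.py
    [[ ${string:0:1} == - ]] && string=${string/-/+}
    echo "$string"
}

Called as gen_random_s 41 it yields strings like the model number rejected above; the serial-number candidate rejected earlier was produced the same way.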
00:11:22.971 11:50:30 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:11:23.229 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]]
00:11:23.229 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8
00:11:23.229 192.168.100.9'
00:11:23.229 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:11:23.230 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8
00:11:23.230 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421
00:11:23.230 [2024-12-09 11:50:31.236328] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:11:23.230 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:11:23.230 {
00:11:23.230 "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:23.230 "listen_address": {
00:11:23.230 "trtype": "rdma",
00:11:23.230 "traddr": "192.168.100.8",
00:11:23.230 "trsvcid": "4421"
00:11:23.230 },
00:11:23.230 "method": "nvmf_subsystem_remove_listener",
00:11:23.230 "req_id": 1
00:11:23.230 }
00:11:23.230 Got JSON-RPC error response
00:11:23.230 response:
00:11:23.230 {
00:11:23.230 "code": -32602,
00:11:23.230 "message": "Invalid parameters"
00:11:23.230 }'
00:11:23.230 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:11:23.230 {
00:11:23.230 "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:23.230 "listen_address": {
00:11:23.230 "trtype": "rdma",
00:11:23.230 "traddr": "192.168.100.8",
00:11:23.230 "trsvcid": "4421"
00:11:23.230 },
00:11:23.230 "method": "nvmf_subsystem_remove_listener",
00:11:23.230 "req_id": 1
00:11:23.230 }
00:11:23.230 Got JSON-RPC error response
00:11:23.230 response:
00:11:23.230 {
00:11:23.230 "code": -32602,
00:11:23.230 "message": "Invalid parameters"
00:11:23.230 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
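Every negative case in this test follows the capture-and-match shape just traced: run rpc.py expecting a failure, keep its combined output in out, then glob-match the JSON-RPC error body. A hedged sketch of that step (the $rpc shorthand and the || true are added here for readability; invalid.sh inlines the full path and its own error handling):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# the call is expected to fail; capture stdout+stderr instead of aborting
out=$($rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
    -t rdma -a 192.168.100.8 -s 4421 2>&1) || true
# pass as long as the failure is NOT the "Unable to stop listener." case
[[ $out != *"Unable to stop listener."* ]]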
"Invalid cntlid range [0-65519]" 00:11:23.488 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:23.488 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20112 -i 65520 00:11:23.747 [2024-12-09 11:50:31.633764] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20112: invalid cntlid range [65520-65519] 00:11:23.747 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:23.747 { 00:11:23.747 "nqn": "nqn.2016-06.io.spdk:cnode20112", 00:11:23.747 "min_cntlid": 65520, 00:11:23.747 "method": "nvmf_create_subsystem", 00:11:23.747 "req_id": 1 00:11:23.747 } 00:11:23.747 Got JSON-RPC error response 00:11:23.747 response: 00:11:23.747 { 00:11:23.747 "code": -32602, 00:11:23.747 "message": "Invalid cntlid range [65520-65519]" 00:11:23.747 }' 00:11:23.747 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:23.747 { 00:11:23.747 "nqn": "nqn.2016-06.io.spdk:cnode20112", 00:11:23.747 "min_cntlid": 65520, 00:11:23.747 "method": "nvmf_create_subsystem", 00:11:23.747 "req_id": 1 00:11:23.747 } 00:11:23.747 Got JSON-RPC error response 00:11:23.747 response: 00:11:23.747 { 00:11:23.747 "code": -32602, 00:11:23.747 "message": "Invalid cntlid range [65520-65519]" 00:11:23.747 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:23.747 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21872 -I 0 00:11:24.006 [2024-12-09 11:50:31.850559] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21872: invalid cntlid range [1-0] 00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:24.006 { 00:11:24.006 "nqn": "nqn.2016-06.io.spdk:cnode21872", 00:11:24.006 "max_cntlid": 0, 00:11:24.006 "method": "nvmf_create_subsystem", 00:11:24.006 "req_id": 1 00:11:24.006 } 00:11:24.006 Got JSON-RPC error response 00:11:24.006 response: 00:11:24.006 { 00:11:24.006 "code": -32602, 00:11:24.006 "message": "Invalid cntlid range [1-0]" 00:11:24.006 }' 00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:24.006 { 00:11:24.006 "nqn": "nqn.2016-06.io.spdk:cnode21872", 00:11:24.006 "max_cntlid": 0, 00:11:24.006 "method": "nvmf_create_subsystem", 00:11:24.006 "req_id": 1 00:11:24.006 } 00:11:24.006 Got JSON-RPC error response 00:11:24.006 response: 00:11:24.006 { 00:11:24.006 "code": -32602, 00:11:24.006 "message": "Invalid cntlid range [1-0]" 00:11:24.006 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11885 -I 65520 00:11:24.006 [2024-12-09 11:50:32.059330] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11885: invalid cntlid range [1-65520] 00:11:24.264 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:24.264 { 00:11:24.264 "nqn": "nqn.2016-06.io.spdk:cnode11885", 00:11:24.264 "max_cntlid": 65520, 00:11:24.264 "method": "nvmf_create_subsystem", 00:11:24.264 "req_id": 1 00:11:24.264 } 00:11:24.264 Got 
00:11:23.747 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21872 -I 0
00:11:24.006 [2024-12-09 11:50:31.850559] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21872: invalid cntlid range [1-0]
00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:11:24.006 {
00:11:24.006 "nqn": "nqn.2016-06.io.spdk:cnode21872",
00:11:24.006 "max_cntlid": 0,
00:11:24.006 "method": "nvmf_create_subsystem",
00:11:24.006 "req_id": 1
00:11:24.006 }
00:11:24.006 Got JSON-RPC error response
00:11:24.006 response:
00:11:24.006 {
00:11:24.006 "code": -32602,
00:11:24.006 "message": "Invalid cntlid range [1-0]"
00:11:24.006 }'
00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:11:24.006 {
00:11:24.006 "nqn": "nqn.2016-06.io.spdk:cnode21872",
00:11:24.006 "max_cntlid": 0,
00:11:24.006 "method": "nvmf_create_subsystem",
00:11:24.006 "req_id": 1
00:11:24.006 }
00:11:24.006 Got JSON-RPC error response
00:11:24.006 response:
00:11:24.006 {
00:11:24.006 "code": -32602,
00:11:24.006 "message": "Invalid cntlid range [1-0]"
00:11:24.006 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:24.006 11:50:31 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11885 -I 65520
00:11:24.006 [2024-12-09 11:50:32.059330] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11885: invalid cntlid range [1-65520]
00:11:24.264 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:11:24.264 {
00:11:24.264 "nqn": "nqn.2016-06.io.spdk:cnode11885",
00:11:24.264 "max_cntlid": 65520,
00:11:24.264 "method": "nvmf_create_subsystem",
00:11:24.264 "req_id": 1
00:11:24.264 }
00:11:24.264 Got JSON-RPC error response
00:11:24.264 response:
00:11:24.264 {
00:11:24.264 "code": -32602,
00:11:24.264 "message": "Invalid cntlid range [1-65520]"
00:11:24.264 }'
00:11:24.265 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:11:24.265 {
00:11:24.265 "nqn": "nqn.2016-06.io.spdk:cnode11885",
00:11:24.265 "max_cntlid": 65520,
00:11:24.265 "method": "nvmf_create_subsystem",
00:11:24.265 "req_id": 1
00:11:24.265 }
00:11:24.265 Got JSON-RPC error response
00:11:24.265 response:
00:11:24.265 {
00:11:24.265 "code": -32602,
00:11:24.265 "message": "Invalid cntlid range [1-65520]"
00:11:24.265 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:11:24.265 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15906 -i 6 -I 5
00:11:24.265 [2024-12-09 11:50:32.256044] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15906: invalid cntlid range [6-5]
00:11:24.265 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:11:24.265 {
00:11:24.265 "nqn": "nqn.2016-06.io.spdk:cnode15906",
00:11:24.265 "min_cntlid": 6,
00:11:24.265 "max_cntlid": 5,
00:11:24.265 "method": "nvmf_create_subsystem",
00:11:24.265 "req_id": 1
00:11:24.265 }
00:11:24.265 Got JSON-RPC error response
00:11:24.265 response:
00:11:24.265 {
00:11:24.265 "code": -32602,
00:11:24.265 "message": "Invalid cntlid range [6-5]"
00:11:24.265 }'
00:11:24.265 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:11:24.265 {
00:11:24.265 "nqn": "nqn.2016-06.io.spdk:cnode15906",
00:11:24.265 "min_cntlid": 6,
00:11:24.265 "max_cntlid": 5,
00:11:24.265 "method": "nvmf_create_subsystem",
00:11:24.265 "req_id": 1
00:11:24.265 }
00:11:24.265 Got JSON-RPC error response
00:11:24.265 response:
00:11:24.265 {
00:11:24.265 "code": -32602,
00:11:24.265 "message": "Invalid cntlid range [6-5]"
00:11:24.265 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
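The last negative case, below, goes through test/nvmf/target/multitarget_rpc.py rather than rpc.py, since creating and deleting whole targets is a multi-target construct; deleting the never-created target foobar must fail with exactly the doesn't-exist error. A sketch of that check (the $mrpc shorthand and the || true capture are assumptions, as in the earlier sketch):

mrpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
out=$($mrpc nvmf_delete_target --name foobar 2>&1) || true
[[ $out == *"The specified target doesn't exist, cannot delete it."* ]]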
00:11:24.265 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:11:24.524 {
00:11:24.524 "name": "foobar",
00:11:24.524 "method": "nvmf_delete_target",
00:11:24.524 "req_id": 1
00:11:24.524 }
00:11:24.524 Got JSON-RPC error response
00:11:24.524 response:
00:11:24.524 {
00:11:24.524 "code": -32602,
00:11:24.524 "message": "The specified target doesn'\''t exist, cannot delete it."
00:11:24.524 }'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:11:24.524 {
00:11:24.524 "name": "foobar",
00:11:24.524 "method": "nvmf_delete_target",
00:11:24.524 "req_id": 1
00:11:24.524 }
00:11:24.524 Got JSON-RPC error response
00:11:24.524 response:
00:11:24.524 {
00:11:24.524 "code": -32602,
00:11:24.524 "message": "The specified target doesn't exist, cannot delete it."
00:11:24.524 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:24.524 rmmod nvme_rdma
00:11:24.524 rmmod nvme_fabrics
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3171783 ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3171783
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3171783 ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3171783
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171783
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171783'
00:11:24.524 killing process with pid 3171783
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3171783
00:11:24.524 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3171783
00:11:24.783 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:24.783 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:11:24.783
00:11:24.783 real 0m10.062s
00:11:24.783 user 0m19.454s
00:11:24.783 sys 0m5.319s
00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:24.784 ************************************
END TEST nvmf_invalid 00:11:24.784 ************************************ 00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.784 ************************************ 00:11:24.784 START TEST nvmf_connect_stress 00:11:24.784 ************************************ 00:11:24.784 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:25.043 * Looking for test storage... 00:11:25.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.043 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:25.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.044 --rc genhtml_branch_coverage=1 00:11:25.044 --rc genhtml_function_coverage=1 00:11:25.044 --rc genhtml_legend=1 00:11:25.044 --rc geninfo_all_blocks=1 00:11:25.044 --rc geninfo_unexecuted_blocks=1 00:11:25.044 00:11:25.044 ' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:25.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.044 --rc genhtml_branch_coverage=1 00:11:25.044 --rc genhtml_function_coverage=1 00:11:25.044 --rc genhtml_legend=1 00:11:25.044 --rc geninfo_all_blocks=1 00:11:25.044 --rc geninfo_unexecuted_blocks=1 00:11:25.044 00:11:25.044 ' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:25.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.044 --rc genhtml_branch_coverage=1 00:11:25.044 --rc genhtml_function_coverage=1 00:11:25.044 --rc genhtml_legend=1 00:11:25.044 --rc geninfo_all_blocks=1 00:11:25.044 --rc geninfo_unexecuted_blocks=1 00:11:25.044 00:11:25.044 ' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:25.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.044 --rc genhtml_branch_coverage=1 00:11:25.044 --rc genhtml_function_coverage=1 00:11:25.044 --rc genhtml_legend=1 00:11:25.044 --rc geninfo_all_blocks=1 00:11:25.044 --rc geninfo_unexecuted_blocks=1 00:11:25.044 00:11:25.044 ' 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:25.044 11:50:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.044 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.044 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.045 11:50:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.615 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:31.616 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:31.616 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:31.616 Found net devices under 0000:da:00.0: mlx_0_0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:31.616 Found net devices under 0000:da:00.1: mlx_0_1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.616 11:50:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.616 
11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:31.616 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.616 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:31.616 altname enp218s0f0np0 00:11:31.616 altname ens818f0np0 00:11:31.616 inet 192.168.100.8/24 scope global mlx_0_0 00:11:31.616 valid_lft forever preferred_lft forever 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:31.616 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.616 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:31.616 altname enp218s0f1np1 00:11:31.616 altname ens818f1np1 00:11:31.616 inet 192.168.100.9/24 scope global mlx_0_1 00:11:31.616 valid_lft forever preferred_lft forever 00:11:31.616 11:50:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.616 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.617 
11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:31.617 192.168.100.9' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:31.617 192.168.100.9' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:31.617 192.168.100.9' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3175830 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3175830 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3175830 ']' 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.617 11:50:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.617 11:50:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 [2024-12-09 11:50:38.952755] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:11:31.617 [2024-12-09 11:50:38.952802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.617 [2024-12-09 11:50:39.029097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.617 [2024-12-09 11:50:39.071838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.617 [2024-12-09 11:50:39.071874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.617 [2024-12-09 11:50:39.071884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.617 [2024-12-09 11:50:39.071890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.617 [2024-12-09 11:50:39.071895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.617 [2024-12-09 11:50:39.073360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.617 [2024-12-09 11:50:39.073467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.617 [2024-12-09 11:50:39.073467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 [2024-12-09 11:50:39.232858] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e7a080/0x1e7e570) succeed. 
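(Editor's note: the trace here is dense, so the following is a condensed sketch of what the nvmf/common.sh helpers and the connect_stress setup accomplish, reconstructed only from commands that appear verbatim in this log; treat it as orientation, not the literal scripts.)

    # RDMA stack bring-up (load_ib_rdma_modules in the trace above)
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # IPv4 address of an RDMA netdev, as get_ip_address derives it: with
    # "ip -o -4" each address is a single record and field 4 is "ADDR/PREFIX"
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8

    # Target-side setup driven over /var/tmp/spdk.sock (rpc_cmd wraps the SPDK
    # RPC client); these exact calls appear just below in the trace
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512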
00:11:31.617 [2024-12-09 11:50:39.244017] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e7b670/0x1ebfc10) succeed. 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 [2024-12-09 11:50:39.358021] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.617 NULL1 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3175972 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:31.617 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 
11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 
11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.618 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.877 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.877 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:31.877 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.877 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.877 11:50:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.135 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.135 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:32.135 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.135 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.135 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.393 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.393 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:32.393 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.393 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.393 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.959 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.959 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:32.959 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.959 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.959 11:50:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.217 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.217 
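(Editor's note: the run of near-identical records that follows is the stress monitor in connect_stress.sh. From the @27/@28 and @34/@35/@38 line markers it can be reconstructed roughly as below; the heredoc payload written into rpc.txt is not visible in this trace, so it is elided rather than guessed.)

    # Queue 20 RPC payloads into $rpcs (the repeated '@28 -- # cat' records)
    for i in $(seq 1 20); do
        cat >> "$rpcs" <<-EOF
        ...
        EOF
    done

    # Poll while the stress client is alive: kill -0 sends no signal, it only
    # tests that the PID still exists; each pass replays the queued RPCs
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"
    done
    wait "$PERF_PID"   # reap it once 'kill -0' reports "No such process"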
11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:33.217 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.217 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.217 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.475 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.475 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:33.475 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.475 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.475 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.733 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:33.733 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.733 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.733 11:50:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.300 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.300 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:34.300 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.300 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.300 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.559 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.559 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:34.559 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.559 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.559 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.817 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.817 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:34.817 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.817 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.817 11:50:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.075 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:35.075 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:35.075 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.075 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.075 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.334 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.334 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:35.334 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.334 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.334 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.900 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.900 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:35.900 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.900 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.900 11:50:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.159 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.159 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:36.159 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.159 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.159 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.418 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.418 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:36.418 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.418 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.418 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.677 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.677 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:36.677 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.677 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.677 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.243 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:37.243 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:37.243 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.243 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.243 11:50:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.501 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:37.501 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.501 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.501 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.760 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.760 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:37.760 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.760 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.760 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:38.018 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.018 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 11:50:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.277 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.277 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:38.277 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.277 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.277 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.844 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:38.844 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.844 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.844 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.103 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.103 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:39.103 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.103 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.103 11:50:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.361 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.361 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:39.361 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.361 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.361 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.620 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.620 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:39.620 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.620 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.620 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.188 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.188 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:40.188 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.188 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.188 11:50:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.446 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.446 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:40.446 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.446 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.446 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.705 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.705 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:40.705 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.705 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.705 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.963 11:50:48 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.963 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:40.964 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.964 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.964 11:50:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.222 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.222 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:41.222 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.222 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.222 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.789 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.789 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:41.789 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.789 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.789 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.789 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3175972 00:11:42.047 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3175972) - No such process 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3175972 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-rdma 00:11:42.047 rmmod nvme_rdma 00:11:42.047 rmmod nvme_fabrics 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3175830 ']' 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3175830 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3175830 ']' 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3175830 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.047 11:50:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175830 00:11:42.047 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.047 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.047 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175830' 00:11:42.047 killing process with pid 3175830 00:11:42.047 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3175830 00:11:42.047 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3175830 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:42.306 00:11:42.306 real 0m17.438s 00:11:42.306 user 0m41.057s 00:11:42.306 sys 0m6.484s 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.306 ************************************ 00:11:42.306 END TEST nvmf_connect_stress 00:11:42.306 ************************************ 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.306 ************************************ 00:11:42.306 START TEST nvmf_fused_ordering 00:11:42.306 ************************************ 00:11:42.306 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
--transport=rdma 00:11:42.572 * Looking for test storage... 00:11:42.573 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.573 --rc genhtml_branch_coverage=1 00:11:42.573 --rc genhtml_function_coverage=1 00:11:42.573 --rc genhtml_legend=1 00:11:42.573 --rc geninfo_all_blocks=1 00:11:42.573 --rc geninfo_unexecuted_blocks=1 00:11:42.573 00:11:42.573 ' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.573 --rc genhtml_branch_coverage=1 00:11:42.573 --rc genhtml_function_coverage=1 00:11:42.573 --rc genhtml_legend=1 00:11:42.573 --rc geninfo_all_blocks=1 00:11:42.573 --rc geninfo_unexecuted_blocks=1 00:11:42.573 00:11:42.573 ' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.573 --rc genhtml_branch_coverage=1 00:11:42.573 --rc genhtml_function_coverage=1 00:11:42.573 --rc genhtml_legend=1 00:11:42.573 --rc geninfo_all_blocks=1 00:11:42.573 --rc geninfo_unexecuted_blocks=1 00:11:42.573 00:11:42.573 ' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.573 --rc genhtml_branch_coverage=1 00:11:42.573 --rc genhtml_function_coverage=1 00:11:42.573 --rc genhtml_legend=1 00:11:42.573 --rc geninfo_all_blocks=1 00:11:42.573 --rc geninfo_unexecuted_blocks=1 00:11:42.573 00:11:42.573 ' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.573 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.574 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.574 11:50:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:49.150 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:49.151 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:49.151 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:49.151 Found net devices under 0000:da:00.0: mlx_0_0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:49.151 Found net devices under 0000:da:00.1: mlx_0_1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.151 11:50:56 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.151 
11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:49.151 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.151 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:49.151 altname enp218s0f0np0 00:11:49.151 altname ens818f0np0 00:11:49.151 inet 192.168.100.8/24 scope global mlx_0_0 00:11:49.151 valid_lft forever preferred_lft forever 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:49.151 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.151 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:49.151 altname enp218s0f1np1 00:11:49.151 altname ens818f1np1 00:11:49.151 inet 192.168.100.9/24 scope global mlx_0_1 00:11:49.151 valid_lft forever preferred_lft forever 00:11:49.151 11:50:56 
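The trace above resolves each Mellanox PCI function to its netdev through sysfs and then parses the interface's IPv4 address with ip/awk/cut (192.168.100.8 and 192.168.100.9 on this host). A minimal standalone sketch of that lookup pattern, using the PCI address from this log as an illustrative value:

#!/usr/bin/env bash
# Sketch of the PCI -> netdev -> IPv4 resolution performed by nvmf/common.sh above.
pci=0000:da:00.0                                  # address taken from this log; adjust per host
for netdev in /sys/bus/pci/devices/$pci/net/*; do
    ifname=${netdev##*/}                          # e.g. mlx_0_0
    # Same extraction as get_ip_address in the trace: column 4 of `ip -o -4 addr show`
    # is ADDR/PREFIX; cut strips the prefix length.
    addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifname $addr"                          # e.g. mlx_0_0 192.168.100.8
done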
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.151 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.152 
11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:49.152 192.168.100.9' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:49.152 192.168.100.9' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:49.152 192.168.100.9' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3180899 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3180899 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3180899 ']' 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.152 11:50:56 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 [2024-12-09 11:50:56.453446] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:11:49.152 [2024-12-09 11:50:56.453489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.152 [2024-12-09 11:50:56.529362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.152 [2024-12-09 11:50:56.570000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.152 [2024-12-09 11:50:56.570035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.152 [2024-12-09 11:50:56.570042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.152 [2024-12-09 11:50:56.570050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.152 [2024-12-09 11:50:56.570056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.152 [2024-12-09 11:50:56.570630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 [2024-12-09 11:50:56.735453] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2465930/0x2469e20) succeed. 00:11:49.152 [2024-12-09 11:50:56.745078] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2466de0/0x24ab4c0) succeed. 
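By this point the harness has launched nvmf_tgt on core mask 0x2, waited for its RPC socket, and created the RDMA transport; the two create_ib_device notices above confirm that both mlx5 ports registered with the target. A hedged sketch of the same bring-up using SPDK's stock RPC client (paths assume an SPDK build tree; the flags mirror the trace):

# Start the target on core 1 (mask 0x2), as nvmfappstart does above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
sleep 2   # crude stand-in for waitforlisten, which polls /var/tmp/spdk.sock
# Create the RDMA transport with the buffer settings from the trace.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192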
00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 [2024-12-09 11:50:56.795049] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 NULL1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.152 11:50:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:49.152 [2024-12-09 11:50:56.852123] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
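The RPC sequence traced above assembles the device under test: a subsystem that allows any host (-a) with serial SPDK00000000000001 and at most 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a 1000 MiB null bdev with 512-byte blocks attached as namespace 1. The equivalent commands through scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512 B blocks; no backing storage
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1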
00:11:49.152 [2024-12-09 11:50:56.852154] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180925 ] 00:11:49.152 Attached to nqn.2016-06.io.spdk:cnode1 00:11:49.152 Namespace ID: 1 size: 1GB 00:11:49.152 fused_ordering(0) 00:11:49.152 fused_ordering(1) 00:11:49.152 fused_ordering(2) 00:11:49.152 fused_ordering(3) 00:11:49.152 fused_ordering(4) 00:11:49.152 fused_ordering(5) 00:11:49.152 fused_ordering(6) 00:11:49.152 fused_ordering(7) 00:11:49.152 fused_ordering(8) 00:11:49.152 fused_ordering(9) 00:11:49.152 fused_ordering(10) 00:11:49.152 fused_ordering(11) 00:11:49.152 fused_ordering(12) 00:11:49.152 fused_ordering(13) 00:11:49.152 fused_ordering(14) 00:11:49.152 fused_ordering(15) 00:11:49.152 fused_ordering(16) 00:11:49.152 fused_ordering(17) 00:11:49.152 fused_ordering(18) 00:11:49.152 fused_ordering(19) 00:11:49.152 fused_ordering(20) 00:11:49.152 fused_ordering(21) 00:11:49.152 fused_ordering(22) 00:11:49.152 fused_ordering(23) 00:11:49.152 fused_ordering(24) 00:11:49.153 fused_ordering(25) 00:11:49.153 fused_ordering(26) 00:11:49.153 fused_ordering(27) 00:11:49.153 fused_ordering(28) 00:11:49.153 fused_ordering(29) 00:11:49.153 fused_ordering(30) 00:11:49.153 fused_ordering(31) 00:11:49.153 fused_ordering(32) 00:11:49.153 fused_ordering(33) 00:11:49.153 fused_ordering(34) 00:11:49.153 fused_ordering(35) 00:11:49.153 fused_ordering(36) 00:11:49.153 fused_ordering(37) 00:11:49.153 fused_ordering(38) 00:11:49.153 fused_ordering(39) 00:11:49.153 fused_ordering(40) 00:11:49.153 fused_ordering(41) 00:11:49.153 fused_ordering(42) 00:11:49.153 fused_ordering(43) 00:11:49.153 fused_ordering(44) 00:11:49.153 fused_ordering(45) 00:11:49.153 fused_ordering(46) 00:11:49.153 fused_ordering(47) 00:11:49.153 fused_ordering(48) 00:11:49.153 fused_ordering(49) 00:11:49.153 fused_ordering(50) 00:11:49.153 fused_ordering(51) 00:11:49.153 fused_ordering(52) 00:11:49.153 fused_ordering(53) 00:11:49.153 fused_ordering(54) 00:11:49.153 fused_ordering(55) 00:11:49.153 fused_ordering(56) 00:11:49.153 fused_ordering(57) 00:11:49.153 fused_ordering(58) 00:11:49.153 fused_ordering(59) 00:11:49.153 fused_ordering(60) 00:11:49.153 fused_ordering(61) 00:11:49.153 fused_ordering(62) 00:11:49.153 fused_ordering(63) 00:11:49.153 fused_ordering(64) 00:11:49.153 fused_ordering(65) 00:11:49.153 fused_ordering(66) 00:11:49.153 fused_ordering(67) 00:11:49.153 fused_ordering(68) 00:11:49.153 fused_ordering(69) 00:11:49.153 fused_ordering(70) 00:11:49.153 fused_ordering(71) 00:11:49.153 fused_ordering(72) 00:11:49.153 fused_ordering(73) 00:11:49.153 fused_ordering(74) 00:11:49.153 fused_ordering(75) 00:11:49.153 fused_ordering(76) 00:11:49.153 fused_ordering(77) 00:11:49.153 fused_ordering(78) 00:11:49.153 fused_ordering(79) 00:11:49.153 fused_ordering(80) 00:11:49.153 fused_ordering(81) 00:11:49.153 fused_ordering(82) 00:11:49.153 fused_ordering(83) 00:11:49.153 fused_ordering(84) 00:11:49.153 fused_ordering(85) 00:11:49.153 fused_ordering(86) 00:11:49.153 fused_ordering(87) 00:11:49.153 fused_ordering(88) 00:11:49.153 fused_ordering(89) 00:11:49.153 fused_ordering(90) 00:11:49.153 fused_ordering(91) 00:11:49.153 fused_ordering(92) 00:11:49.153 fused_ordering(93) 00:11:49.153 fused_ordering(94) 00:11:49.153 fused_ordering(95) 00:11:49.153 fused_ordering(96) 00:11:49.153 fused_ordering(97) 00:11:49.153 fused_ordering(98) 
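The "Attached to nqn..." banner and the numbered fused_ordering(n) lines are emitted by the fused_ordering test app itself (test/nvme/fused_ordering/fused_ordering, invoked above with the RDMA transport ID); each counter appears to mark one completed iteration of its fused-command submission loop, so an unbroken ascending run with no interleaved errors is the expected passing output. When scanning a saved console log, a plain grep is enough to tally the iterations:

grep -c 'fused_ordering(' console.log   # console.log is an illustrative filename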
00:11:49.153 fused_ordering(99) 00:11:49.153 fused_ordering(100) 00:11:49.153 fused_ordering(101) 00:11:49.153 fused_ordering(102) 00:11:49.153 fused_ordering(103) 00:11:49.153 fused_ordering(104) 00:11:49.153 fused_ordering(105) 00:11:49.153 fused_ordering(106) 00:11:49.153 fused_ordering(107) 00:11:49.153 fused_ordering(108) 00:11:49.153 fused_ordering(109) 00:11:49.153 fused_ordering(110) 00:11:49.153 fused_ordering(111) 00:11:49.153 fused_ordering(112) 00:11:49.153 fused_ordering(113) 00:11:49.153 fused_ordering(114) 00:11:49.153 fused_ordering(115) 00:11:49.153 fused_ordering(116) 00:11:49.153 fused_ordering(117) 00:11:49.153 fused_ordering(118) 00:11:49.153 fused_ordering(119) 00:11:49.153 fused_ordering(120) 00:11:49.153 fused_ordering(121) 00:11:49.153 fused_ordering(122) 00:11:49.153 fused_ordering(123) 00:11:49.153 fused_ordering(124) 00:11:49.153 fused_ordering(125) 00:11:49.153 fused_ordering(126) 00:11:49.153 fused_ordering(127) 00:11:49.153 fused_ordering(128) 00:11:49.153 fused_ordering(129) 00:11:49.153 fused_ordering(130) 00:11:49.153 fused_ordering(131) 00:11:49.153 fused_ordering(132) 00:11:49.153 fused_ordering(133) 00:11:49.153 fused_ordering(134) 00:11:49.153 fused_ordering(135) 00:11:49.153 fused_ordering(136) 00:11:49.153 fused_ordering(137) 00:11:49.153 fused_ordering(138) 00:11:49.153 fused_ordering(139) 00:11:49.153 fused_ordering(140) 00:11:49.153 fused_ordering(141) 00:11:49.153 fused_ordering(142) 00:11:49.153 fused_ordering(143) 00:11:49.153 fused_ordering(144) 00:11:49.153 fused_ordering(145) 00:11:49.153 fused_ordering(146) 00:11:49.153 fused_ordering(147) 00:11:49.153 fused_ordering(148) 00:11:49.153 fused_ordering(149) 00:11:49.153 fused_ordering(150) 00:11:49.153 fused_ordering(151) 00:11:49.153 fused_ordering(152) 00:11:49.153 fused_ordering(153) 00:11:49.153 fused_ordering(154) 00:11:49.153 fused_ordering(155) 00:11:49.153 fused_ordering(156) 00:11:49.153 fused_ordering(157) 00:11:49.153 fused_ordering(158) 00:11:49.153 fused_ordering(159) 00:11:49.153 fused_ordering(160) 00:11:49.153 fused_ordering(161) 00:11:49.153 fused_ordering(162) 00:11:49.153 fused_ordering(163) 00:11:49.153 fused_ordering(164) 00:11:49.153 fused_ordering(165) 00:11:49.153 fused_ordering(166) 00:11:49.153 fused_ordering(167) 00:11:49.153 fused_ordering(168) 00:11:49.153 fused_ordering(169) 00:11:49.153 fused_ordering(170) 00:11:49.153 fused_ordering(171) 00:11:49.153 fused_ordering(172) 00:11:49.153 fused_ordering(173) 00:11:49.153 fused_ordering(174) 00:11:49.153 fused_ordering(175) 00:11:49.153 fused_ordering(176) 00:11:49.153 fused_ordering(177) 00:11:49.153 fused_ordering(178) 00:11:49.153 fused_ordering(179) 00:11:49.153 fused_ordering(180) 00:11:49.153 fused_ordering(181) 00:11:49.153 fused_ordering(182) 00:11:49.153 fused_ordering(183) 00:11:49.153 fused_ordering(184) 00:11:49.153 fused_ordering(185) 00:11:49.153 fused_ordering(186) 00:11:49.153 fused_ordering(187) 00:11:49.153 fused_ordering(188) 00:11:49.153 fused_ordering(189) 00:11:49.153 fused_ordering(190) 00:11:49.153 fused_ordering(191) 00:11:49.153 fused_ordering(192) 00:11:49.153 fused_ordering(193) 00:11:49.153 fused_ordering(194) 00:11:49.153 fused_ordering(195) 00:11:49.153 fused_ordering(196) 00:11:49.153 fused_ordering(197) 00:11:49.153 fused_ordering(198) 00:11:49.153 fused_ordering(199) 00:11:49.153 fused_ordering(200) 00:11:49.153 fused_ordering(201) 00:11:49.153 fused_ordering(202) 00:11:49.153 fused_ordering(203) 00:11:49.153 fused_ordering(204) 00:11:49.153 fused_ordering(205) 00:11:49.153 
fused_ordering(206) 00:11:49.153 fused_ordering(207) 00:11:49.153 fused_ordering(208) 00:11:49.153 fused_ordering(209) 00:11:49.153 fused_ordering(210) 00:11:49.153 fused_ordering(211) 00:11:49.153 fused_ordering(212) 00:11:49.153 fused_ordering(213) 00:11:49.153 fused_ordering(214) 00:11:49.153 fused_ordering(215) 00:11:49.153 fused_ordering(216) 00:11:49.153 fused_ordering(217) 00:11:49.153 fused_ordering(218) 00:11:49.153 fused_ordering(219) 00:11:49.153 fused_ordering(220) 00:11:49.153 fused_ordering(221) 00:11:49.153 fused_ordering(222) 00:11:49.153 fused_ordering(223) 00:11:49.153 fused_ordering(224) 00:11:49.153 fused_ordering(225) 00:11:49.153 fused_ordering(226) 00:11:49.153 fused_ordering(227) 00:11:49.153 fused_ordering(228) 00:11:49.153 fused_ordering(229) 00:11:49.153 fused_ordering(230) 00:11:49.153 fused_ordering(231) 00:11:49.153 fused_ordering(232) 00:11:49.153 fused_ordering(233) 00:11:49.153 fused_ordering(234) 00:11:49.153 fused_ordering(235) 00:11:49.153 fused_ordering(236) 00:11:49.153 fused_ordering(237) 00:11:49.153 fused_ordering(238) 00:11:49.153 fused_ordering(239) 00:11:49.153 fused_ordering(240) 00:11:49.153 fused_ordering(241) 00:11:49.153 fused_ordering(242) 00:11:49.153 fused_ordering(243) 00:11:49.153 fused_ordering(244) 00:11:49.153 fused_ordering(245) 00:11:49.153 fused_ordering(246) 00:11:49.153 fused_ordering(247) 00:11:49.153 fused_ordering(248) 00:11:49.153 fused_ordering(249) 00:11:49.153 fused_ordering(250) 00:11:49.153 fused_ordering(251) 00:11:49.153 fused_ordering(252) 00:11:49.153 fused_ordering(253) 00:11:49.153 fused_ordering(254) 00:11:49.153 fused_ordering(255) 00:11:49.153 fused_ordering(256) 00:11:49.153 fused_ordering(257) 00:11:49.153 fused_ordering(258) 00:11:49.153 fused_ordering(259) 00:11:49.153 fused_ordering(260) 00:11:49.153 fused_ordering(261) 00:11:49.153 fused_ordering(262) 00:11:49.153 fused_ordering(263) 00:11:49.153 fused_ordering(264) 00:11:49.153 fused_ordering(265) 00:11:49.153 fused_ordering(266) 00:11:49.153 fused_ordering(267) 00:11:49.153 fused_ordering(268) 00:11:49.153 fused_ordering(269) 00:11:49.153 fused_ordering(270) 00:11:49.153 fused_ordering(271) 00:11:49.153 fused_ordering(272) 00:11:49.153 fused_ordering(273) 00:11:49.153 fused_ordering(274) 00:11:49.153 fused_ordering(275) 00:11:49.153 fused_ordering(276) 00:11:49.153 fused_ordering(277) 00:11:49.153 fused_ordering(278) 00:11:49.153 fused_ordering(279) 00:11:49.153 fused_ordering(280) 00:11:49.153 fused_ordering(281) 00:11:49.153 fused_ordering(282) 00:11:49.153 fused_ordering(283) 00:11:49.153 fused_ordering(284) 00:11:49.153 fused_ordering(285) 00:11:49.153 fused_ordering(286) 00:11:49.153 fused_ordering(287) 00:11:49.153 fused_ordering(288) 00:11:49.153 fused_ordering(289) 00:11:49.153 fused_ordering(290) 00:11:49.153 fused_ordering(291) 00:11:49.153 fused_ordering(292) 00:11:49.153 fused_ordering(293) 00:11:49.153 fused_ordering(294) 00:11:49.153 fused_ordering(295) 00:11:49.153 fused_ordering(296) 00:11:49.153 fused_ordering(297) 00:11:49.153 fused_ordering(298) 00:11:49.153 fused_ordering(299) 00:11:49.153 fused_ordering(300) 00:11:49.153 fused_ordering(301) 00:11:49.153 fused_ordering(302) 00:11:49.153 fused_ordering(303) 00:11:49.153 fused_ordering(304) 00:11:49.153 fused_ordering(305) 00:11:49.153 fused_ordering(306) 00:11:49.153 fused_ordering(307) 00:11:49.153 fused_ordering(308) 00:11:49.153 fused_ordering(309) 00:11:49.153 fused_ordering(310) 00:11:49.153 fused_ordering(311) 00:11:49.153 fused_ordering(312) 00:11:49.153 fused_ordering(313) 
00:11:49.153 fused_ordering(314) 00:11:49.153 fused_ordering(315) 00:11:49.153 fused_ordering(316) 00:11:49.153 fused_ordering(317) 00:11:49.154 fused_ordering(318) 00:11:49.154 fused_ordering(319) 00:11:49.154 fused_ordering(320) 00:11:49.154 fused_ordering(321) 00:11:49.154 fused_ordering(322) 00:11:49.154 fused_ordering(323) 00:11:49.154 fused_ordering(324) 00:11:49.154 fused_ordering(325) 00:11:49.154 fused_ordering(326) 00:11:49.154 fused_ordering(327) 00:11:49.154 fused_ordering(328) 00:11:49.154 fused_ordering(329) 00:11:49.154 fused_ordering(330) 00:11:49.154 fused_ordering(331) 00:11:49.154 fused_ordering(332) 00:11:49.154 fused_ordering(333) 00:11:49.154 fused_ordering(334) 00:11:49.154 fused_ordering(335) 00:11:49.154 fused_ordering(336) 00:11:49.154 fused_ordering(337) 00:11:49.154 fused_ordering(338) 00:11:49.154 fused_ordering(339) 00:11:49.154 fused_ordering(340) 00:11:49.154 fused_ordering(341) 00:11:49.154 fused_ordering(342) 00:11:49.154 fused_ordering(343) 00:11:49.154 fused_ordering(344) 00:11:49.154 fused_ordering(345) 00:11:49.154 fused_ordering(346) 00:11:49.154 fused_ordering(347) 00:11:49.154 fused_ordering(348) 00:11:49.154 fused_ordering(349) 00:11:49.154 fused_ordering(350) 00:11:49.154 fused_ordering(351) 00:11:49.154 fused_ordering(352) 00:11:49.154 fused_ordering(353) 00:11:49.154 fused_ordering(354) 00:11:49.154 fused_ordering(355) 00:11:49.154 fused_ordering(356) 00:11:49.154 fused_ordering(357) 00:11:49.154 fused_ordering(358) 00:11:49.154 fused_ordering(359) 00:11:49.154 fused_ordering(360) 00:11:49.154 fused_ordering(361) 00:11:49.154 fused_ordering(362) 00:11:49.154 fused_ordering(363) 00:11:49.154 fused_ordering(364) 00:11:49.154 fused_ordering(365) 00:11:49.154 fused_ordering(366) 00:11:49.154 fused_ordering(367) 00:11:49.154 fused_ordering(368) 00:11:49.154 fused_ordering(369) 00:11:49.154 fused_ordering(370) 00:11:49.154 fused_ordering(371) 00:11:49.154 fused_ordering(372) 00:11:49.154 fused_ordering(373) 00:11:49.154 fused_ordering(374) 00:11:49.154 fused_ordering(375) 00:11:49.154 fused_ordering(376) 00:11:49.154 fused_ordering(377) 00:11:49.154 fused_ordering(378) 00:11:49.154 fused_ordering(379) 00:11:49.154 fused_ordering(380) 00:11:49.154 fused_ordering(381) 00:11:49.154 fused_ordering(382) 00:11:49.154 fused_ordering(383) 00:11:49.154 fused_ordering(384) 00:11:49.154 fused_ordering(385) 00:11:49.154 fused_ordering(386) 00:11:49.154 fused_ordering(387) 00:11:49.154 fused_ordering(388) 00:11:49.154 fused_ordering(389) 00:11:49.154 fused_ordering(390) 00:11:49.154 fused_ordering(391) 00:11:49.154 fused_ordering(392) 00:11:49.154 fused_ordering(393) 00:11:49.154 fused_ordering(394) 00:11:49.154 fused_ordering(395) 00:11:49.154 fused_ordering(396) 00:11:49.154 fused_ordering(397) 00:11:49.154 fused_ordering(398) 00:11:49.154 fused_ordering(399) 00:11:49.154 fused_ordering(400) 00:11:49.154 fused_ordering(401) 00:11:49.154 fused_ordering(402) 00:11:49.154 fused_ordering(403) 00:11:49.154 fused_ordering(404) 00:11:49.154 fused_ordering(405) 00:11:49.154 fused_ordering(406) 00:11:49.154 fused_ordering(407) 00:11:49.154 fused_ordering(408) 00:11:49.154 fused_ordering(409) 00:11:49.154 fused_ordering(410) 00:11:49.413 fused_ordering(411) 00:11:49.413 fused_ordering(412) 00:11:49.413 fused_ordering(413) 00:11:49.413 fused_ordering(414) 00:11:49.413 fused_ordering(415) 00:11:49.413 fused_ordering(416) 00:11:49.413 fused_ordering(417) 00:11:49.413 fused_ordering(418) 00:11:49.413 fused_ordering(419) 00:11:49.413 fused_ordering(420) 00:11:49.413 
fused_ordering(421) 00:11:49.413 fused_ordering(422) 00:11:49.413 fused_ordering(423) 00:11:49.413 fused_ordering(424) 00:11:49.413 fused_ordering(425) 00:11:49.413 fused_ordering(426) 00:11:49.413 fused_ordering(427) 00:11:49.413 fused_ordering(428) 00:11:49.413 fused_ordering(429) 00:11:49.413 fused_ordering(430) 00:11:49.413 fused_ordering(431) 00:11:49.413 fused_ordering(432) 00:11:49.413 fused_ordering(433) 00:11:49.413 fused_ordering(434) 00:11:49.413 fused_ordering(435) 00:11:49.413 fused_ordering(436) 00:11:49.413 fused_ordering(437) 00:11:49.413 fused_ordering(438) 00:11:49.413 fused_ordering(439) 00:11:49.413 fused_ordering(440) 00:11:49.413 fused_ordering(441) 00:11:49.413 fused_ordering(442) 00:11:49.413 fused_ordering(443) 00:11:49.413 fused_ordering(444) 00:11:49.413 fused_ordering(445) 00:11:49.413 fused_ordering(446) 00:11:49.413 fused_ordering(447) 00:11:49.413 fused_ordering(448) 00:11:49.413 fused_ordering(449) 00:11:49.413 fused_ordering(450) 00:11:49.413 fused_ordering(451) 00:11:49.413 fused_ordering(452) 00:11:49.413 fused_ordering(453) 00:11:49.413 fused_ordering(454) 00:11:49.413 fused_ordering(455) 00:11:49.413 fused_ordering(456) 00:11:49.413 fused_ordering(457) 00:11:49.413 fused_ordering(458) 00:11:49.413 fused_ordering(459) 00:11:49.413 fused_ordering(460) 00:11:49.413 fused_ordering(461) 00:11:49.413 fused_ordering(462) 00:11:49.413 fused_ordering(463) 00:11:49.413 fused_ordering(464) 00:11:49.413 fused_ordering(465) 00:11:49.413 fused_ordering(466) 00:11:49.413 fused_ordering(467) 00:11:49.413 fused_ordering(468) 00:11:49.413 fused_ordering(469) 00:11:49.413 fused_ordering(470) 00:11:49.413 fused_ordering(471) 00:11:49.413 fused_ordering(472) 00:11:49.413 fused_ordering(473) 00:11:49.413 fused_ordering(474) 00:11:49.413 fused_ordering(475) 00:11:49.413 fused_ordering(476) 00:11:49.413 fused_ordering(477) 00:11:49.413 fused_ordering(478) 00:11:49.413 fused_ordering(479) 00:11:49.413 fused_ordering(480) 00:11:49.413 fused_ordering(481) 00:11:49.413 fused_ordering(482) 00:11:49.413 fused_ordering(483) 00:11:49.413 fused_ordering(484) 00:11:49.413 fused_ordering(485) 00:11:49.413 fused_ordering(486) 00:11:49.413 fused_ordering(487) 00:11:49.413 fused_ordering(488) 00:11:49.413 fused_ordering(489) 00:11:49.413 fused_ordering(490) 00:11:49.413 fused_ordering(491) 00:11:49.413 fused_ordering(492) 00:11:49.413 fused_ordering(493) 00:11:49.413 fused_ordering(494) 00:11:49.413 fused_ordering(495) 00:11:49.413 fused_ordering(496) 00:11:49.413 fused_ordering(497) 00:11:49.413 fused_ordering(498) 00:11:49.413 fused_ordering(499) 00:11:49.413 fused_ordering(500) 00:11:49.413 fused_ordering(501) 00:11:49.413 fused_ordering(502) 00:11:49.413 fused_ordering(503) 00:11:49.413 fused_ordering(504) 00:11:49.413 fused_ordering(505) 00:11:49.413 fused_ordering(506) 00:11:49.413 fused_ordering(507) 00:11:49.413 fused_ordering(508) 00:11:49.413 fused_ordering(509) 00:11:49.413 fused_ordering(510) 00:11:49.413 fused_ordering(511) 00:11:49.413 fused_ordering(512) 00:11:49.413 fused_ordering(513) 00:11:49.413 fused_ordering(514) 00:11:49.413 fused_ordering(515) 00:11:49.413 fused_ordering(516) 00:11:49.413 fused_ordering(517) 00:11:49.413 fused_ordering(518) 00:11:49.413 fused_ordering(519) 00:11:49.413 fused_ordering(520) 00:11:49.413 fused_ordering(521) 00:11:49.413 fused_ordering(522) 00:11:49.413 fused_ordering(523) 00:11:49.413 fused_ordering(524) 00:11:49.413 fused_ordering(525) 00:11:49.413 fused_ordering(526) 00:11:49.413 fused_ordering(527) 00:11:49.413 fused_ordering(528) 
00:11:49.413 fused_ordering(529) [... fused_ordering(530) through fused_ordering(1022) condensed: repetitive counter output, one entry per ID, timestamps 00:11:49.413-00:11:49.674 ...] 00:11:49.674 fused_ordering(1023) 00:11:49.674
11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:49.674 rmmod nvme_rdma 00:11:49.674 rmmod nvme_fabrics 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:49.674 11:50:57
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3180899 ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3180899 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3180899 ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3180899 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3180899 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3180899' 00:11:49.674 killing process with pid 3180899 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3180899 00:11:49.674 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3180899 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:49.933 00:11:49.933 real 0m7.508s 00:11:49.933 user 0m3.939s 00:11:49.933 sys 0m4.729s 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 ************************************ 00:11:49.933 END TEST nvmf_fused_ordering 00:11:49.933 ************************************ 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 ************************************ 00:11:49.933 START TEST nvmf_ns_masking 00:11:49.933 ************************************ 00:11:49.933 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:50.193 * Looking for test storage... 
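Note on the teardown traced above: killprocess is autotest_common.sh's guarded kill. Reconstructed from the trace, the helper reduces to roughly the following bash sketch (a hedged reconstruction, not the verbatim helper; the real function has more branches, e.g. a separate path when the traced comm is sudo):

    killprocess() {
        local pid=$1
        # no recorded pid -> nothing to clean up
        [ -z "$pid" ] && return 1
        # kill -0 only probes liveness; it fails if the process already exited
        kill -0 "$pid" 2>/dev/null || return 0
        # the trace compares ps --no-headers -o comm= output ("reactor_1" here)
        # against "sudo" so a privileged wrapper is never signalled directly
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so no zombie survives into the next test
        # (wait only works for children of this shell, as in autotest)
        wait "$pid" 2>/dev/null || true
    }

In this run the nvmf target (pid 3180899, comm reactor_1) is killed and reaped before nvmf_ns_masking starts.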
00:11:50.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:50.193 11:50:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.193 --rc genhtml_branch_coverage=1 00:11:50.193 --rc genhtml_function_coverage=1 00:11:50.193 --rc genhtml_legend=1 00:11:50.193 --rc geninfo_all_blocks=1 00:11:50.193 --rc geninfo_unexecuted_blocks=1 00:11:50.193 00:11:50.193 ' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.193 --rc genhtml_branch_coverage=1 00:11:50.193 --rc genhtml_function_coverage=1 00:11:50.193 --rc genhtml_legend=1 00:11:50.193 --rc geninfo_all_blocks=1 00:11:50.193 --rc geninfo_unexecuted_blocks=1 00:11:50.193 00:11:50.193 ' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.193 --rc genhtml_branch_coverage=1 00:11:50.193 --rc genhtml_function_coverage=1 00:11:50.193 --rc genhtml_legend=1 00:11:50.193 --rc geninfo_all_blocks=1 00:11:50.193 --rc geninfo_unexecuted_blocks=1 00:11:50.193 00:11:50.193 ' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.193 --rc genhtml_branch_coverage=1 00:11:50.193 --rc genhtml_function_coverage=1 00:11:50.193 --rc genhtml_legend=1 00:11:50.193 --rc geninfo_all_blocks=1 00:11:50.193 --rc geninfo_unexecuted_blocks=1 00:11:50.193 00:11:50.193 ' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.193 11:50:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.193 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:50.194 11:50:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6660cbc4-e7ee-477d-9a05-d591579c8b0e 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ae278320-1c56-4f31-ab87-d00dbec756d2 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=026b5a52-cbf9-4e27-9283-ab9d35fb038b 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.194 11:50:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.764 11:51:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:56.764 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:56.764 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:56.764 Found net devices under 0000:da:00.0: mlx_0_0 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:56.764 Found net devices under 0000:da:00.1: mlx_0_1 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:56.764 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:56.765 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:56.765 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:56.765 altname enp218s0f0np0 00:11:56.765 altname ens818f0np0 00:11:56.765 inet 192.168.100.8/24 scope global mlx_0_0 00:11:56.765 valid_lft forever preferred_lft forever 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:56.765 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:56.765 link/ether 
ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:56.765 altname enp218s0f1np1 00:11:56.765 altname ens818f1np1 00:11:56.765 inet 192.168.100.9/24 scope global mlx_0_1 00:11:56.765 valid_lft forever preferred_lft forever 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
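Note: the get_ip_address calls traced above are a three-stage pipeline over iproute2's one-line output mode. A minimal bash sketch of the exact pattern in the trace:

    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one record per line; field 4 holds
        # "ADDR/PREFIXLEN", so awk selects the field and cut drops the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # on this rig: get_ip_address mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9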
00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:56.765 192.168.100.9' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:56.765 192.168.100.9' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:56.765 192.168.100.9' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:56.765 11:51:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3184353 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3184353 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3184353 ']' 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.765 11:51:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.765 [2024-12-09 11:51:04.057604] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:11:56.765 [2024-12-09 11:51:04.057651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.765 [2024-12-09 11:51:04.135916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.765 [2024-12-09 11:51:04.176477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.765 [2024-12-09 11:51:04.176514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.765 [2024-12-09 11:51:04.176520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.765 [2024-12-09 11:51:04.176526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.765 [2024-12-09 11:51:04.176532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.765 [2024-12-09 11:51:04.177116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.765 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:56.766 [2024-12-09 11:51:04.518158] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x998620/0x99cb10) succeed. 00:11:56.766 [2024-12-09 11:51:04.527297] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x999ad0/0x9de1b0) succeed. 
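Note: with the RDMA transport created and both mlx5 IB devices up, the ns_masking test assembles its target with the rpc.py calls traced immediately above and below. Collected into one bash sketch (commands verbatim from this run; only the $rpc shorthand is added for readability):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1   # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with "nvme connect -t rdma ... -I <host uuid>", and namespace visibility is verified by grepping nvme list-ns and comparing the nguid reported by nvme id-ns against the all-zero mask, as the trace below shows.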
00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:56.766 Malloc1 00:11:56.766 11:51:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:57.024 Malloc2 00:11:57.024 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.282 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:57.540 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:57.540 [2024-12-09 11:51:05.583924] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:57.797 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:57.797 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 026b5a52-cbf9-4e27-9283-ab9d35fb038b -a 192.168.100.8 -s 4420 -i 4 00:11:58.054 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.054 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.054 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.054 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:58.054 11:51:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.957 11:51:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.957 [ 0]:0x1 00:11:59.957 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.957 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66ebeae0f7394775bf76f5e5dcff47cb 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66ebeae0f7394775bf76f5e5dcff47cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.216 [ 0]:0x1 00:12:00.216 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66ebeae0f7394775bf76f5e5dcff47cb 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66ebeae0f7394775bf76f5e5dcff47cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.474 [ 1]:0x2 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:00.474 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:00.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.733 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.994 11:51:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:01.252 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:01.252 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 026b5a52-cbf9-4e27-9283-ab9d35fb038b -a 192.168.100.8 -s 4420 -i 4 00:12:01.508 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:01.508 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.508 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.508 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:01.509 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:01.509 11:51:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:03.409 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.668 [ 0]:0x2 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.668 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.927 [ 0]:0x1 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.927 11:51:11 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66ebeae0f7394775bf76f5e5dcff47cb 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66ebeae0f7394775bf76f5e5dcff47cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.927 [ 1]:0x2 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.927 11:51:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.185 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.186 [ 0]:0x2 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:04.186 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.444 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:04.702 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:04.702 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 026b5a52-cbf9-4e27-9283-ab9d35fb038b -a 192.168.100.8 -s 4420 -i 4 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:04.961 11:51:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:07.493 11:51:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:07.493 11:51:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:07.493 11:51:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.493 11:51:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.493 [ 0]:0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66ebeae0f7394775bf76f5e5dcff47cb 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66ebeae0f7394775bf76f5e5dcff47cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.493 [ 1]:0x2 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:07.493 11:51:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.493 [ 0]:0x2 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:07.493 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:07.752 [2024-12-09 11:51:15.629702] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:07.752 request: 00:12:07.752 { 00:12:07.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.752 "nsid": 2, 00:12:07.752 "host": "nqn.2016-06.io.spdk:host1", 00:12:07.752 "method": "nvmf_ns_remove_host", 00:12:07.752 "req_id": 1 00:12:07.752 } 00:12:07.752 Got JSON-RPC error response 00:12:07.752 response: 00:12:07.752 { 00:12:07.752 "code": -32602, 00:12:07.752 "message": "Invalid parameters" 00:12:07.752 } 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.752 11:51:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.752 [ 0]:0x2 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=376235a7db8140aabf74ad7641e101e6 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 376235a7db8140aabf74ad7641e101e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:07.752 11:51:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3186955 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3186955 /var/tmp/host.sock 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3186955 ']' 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:08.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
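The trace above is the whole kernel-initiator masking loop in one pass: a namespace created with --no-auto-visible stays hidden from every host until nvmf_ns_add_host grants access, and nvmf_ns_remove_host hides it again, with visibility checked by grepping nvme list-ns output and by comparing the NGUID from nvme id-ns against the all-zero placeholder. Below is a condensed sketch of that flow, not the test script itself: rpc.py stands in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path, the controller is assumed to enumerate as /dev/nvme0 as it does here, and the listener address and port are the ones used in this run.

  # Target side: backing bdev, subsystem, masked namespace, RDMA listener.
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Host side: connect as host1 (the -I host-id and -i queue-count flags seen
  # in the log are dropped here); the masked namespace must not be listed.
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -a 192.168.100.8 -s 4420
  nvme list-ns /dev/nvme0 | grep -c 0x1                  # expect 0 matches

  # Grant host1 access: the namespace appears and reports a real NGUID.
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0 | grep 0x1                     # now listed
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # non-zero value

  # Revoke it again: id-ns falls back to the all-zero NGUID, as the NOT
  # ns_is_visible checks in the trace verify.
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1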
00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.319 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:08.319 [2024-12-09 11:51:16.118409] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:12:08.319 [2024-12-09 11:51:16.118454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186955 ] 00:12:08.319 [2024-12-09 11:51:16.195907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.319 [2024-12-09 11:51:16.235962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.255 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.255 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:09.255 11:51:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.255 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6660cbc4-e7ee-477d-9a05-d591579c8b0e 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6660CBC4E7EE477D9A05D591579C8B0E -i 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ae278320-1c56-4f31-ab87-d00dbec756d2 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:09.514 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AE2783201C564F31AB87D00DBEC756D2 -i 00:12:09.773 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.032 11:51:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:10.291 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:10.291 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:12:10.550 nvme0n1 00:12:10.550 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:10.550 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:10.808 nvme1n2 00:12:10.808 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:10.808 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:10.808 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:10.808 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:10.808 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:11.066 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:11.066 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:11.066 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:11.066 11:51:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:11.066 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6660cbc4-e7ee-477d-9a05-d591579c8b0e == \6\6\6\0\c\b\c\4\-\e\7\e\e\-\4\7\7\d\-\9\a\0\5\-\d\5\9\1\5\7\9\c\8\b\0\e ]] 00:12:11.066 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:11.066 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:11.066 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:11.324 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ae278320-1c56-4f31-ab87-d00dbec756d2 == \a\e\2\7\8\3\2\0\-\1\c\5\6\-\4\f\3\1\-\a\b\8\7\-\d\0\0\d\b\e\c\7\5\6\d\2 ]] 00:12:11.324 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.583 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6660cbc4-e7ee-477d-9a05-d591579c8b0e 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6660CBC4E7EE477D9A05D591579C8B0E 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6660CBC4E7EE477D9A05D591579C8B0E 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:11.842 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6660CBC4E7EE477D9A05D591579C8B0E 00:12:11.842 [2024-12-09 11:51:19.880184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:11.842 [2024-12-09 11:51:19.880219] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:11.842 [2024-12-09 11:51:19.880227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.842 request: 00:12:11.842 { 00:12:11.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.842 "namespace": { 00:12:11.842 "bdev_name": "invalid", 00:12:11.842 "nsid": 1, 00:12:11.842 "nguid": "6660CBC4E7EE477D9A05D591579C8B0E", 00:12:11.842 "no_auto_visible": false, 00:12:11.842 "hide_metadata": false 00:12:11.842 }, 00:12:11.842 "method": "nvmf_subsystem_add_ns", 00:12:11.842 "req_id": 1 00:12:11.842 } 00:12:11.842 Got JSON-RPC error response 00:12:11.842 response: 00:12:11.842 { 00:12:11.842 "code": -32602, 00:12:11.842 "message": "Invalid parameters" 00:12:11.842 } 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.101 
11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6660cbc4-e7ee-477d-9a05-d591579c8b0e 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:12.101 11:51:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6660CBC4E7EE477D9A05D591579C8B0E -i 00:12:12.101 11:51:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3186955 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3186955 ']' 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3186955 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186955 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3186955' 00:12:14.634 killing process with pid 3186955 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3186955 00:12:14.634 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3186955 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:14.893 
11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:14.893 rmmod nvme_rdma 00:12:14.893 rmmod nvme_fabrics 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3184353 ']' 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3184353 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3184353 ']' 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3184353 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.893 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184353 00:12:15.152 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.152 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.152 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184353' 00:12:15.152 killing process with pid 3184353 00:12:15.152 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3184353 00:12:15.152 11:51:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3184353 00:12:15.410 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.410 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:15.410 00:12:15.410 real 0m25.302s 00:12:15.410 user 0m33.663s 00:12:15.411 sys 0m6.458s 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:15.411 ************************************ 00:12:15.411 END TEST nvmf_ns_masking 00:12:15.411 ************************************ 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.411 ************************************ 00:12:15.411 START TEST nvmf_nvme_cli 00:12:15.411 ************************************ 
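Before the nvme_cli run gets underway, note how the second half of the just-finished ns_masking test re-verified the same rules from a userspace initiator instead of the kernel host: a second SPDK app was started with -r /var/tmp/host.sock, the namespaces were re-added with explicit NGUIDs (in this log, uuid2nguid is the UUID upper-cased with dashes stripped, e.g. 6660cbc4-e7ee-477d-9a05-d591579c8b0e becomes 6660CBC4E7EE477D9A05D591579C8B0E), and per-host attachments were checked through bdev RPCs; the negative cases (nvmf_ns_remove_host for a namespace the host was never granted, nvmf_subsystem_add_ns with a nonexistent bdev) were expected to fail with JSON-RPC error -32602. A condensed sketch of the host-side verification, with rpc.py abbreviated as before and the same target assumed:

  # One attach per host identity, against the same RDMA listener.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

  # host1 was granted nsid 1 and host2 nsid 2, so exactly these bdevs appear.
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # -> nvme0n1 nvme1n2

  # Each bdev's UUID must round-trip to the NGUID passed via -g at provisioning.
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
  # -> 6660cbc4-e7ee-477d-9a05-d591579c8b0e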
00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:15.411 * Looking for test storage... 00:12:15.411 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.411 --rc genhtml_branch_coverage=1 00:12:15.411 --rc genhtml_function_coverage=1 00:12:15.411 --rc genhtml_legend=1 00:12:15.411 --rc geninfo_all_blocks=1 00:12:15.411 --rc geninfo_unexecuted_blocks=1 00:12:15.411 00:12:15.411 ' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.411 --rc genhtml_branch_coverage=1 00:12:15.411 --rc genhtml_function_coverage=1 00:12:15.411 --rc genhtml_legend=1 00:12:15.411 --rc geninfo_all_blocks=1 00:12:15.411 --rc geninfo_unexecuted_blocks=1 00:12:15.411 00:12:15.411 ' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.411 --rc genhtml_branch_coverage=1 00:12:15.411 --rc genhtml_function_coverage=1 00:12:15.411 --rc genhtml_legend=1 00:12:15.411 --rc geninfo_all_blocks=1 00:12:15.411 --rc geninfo_unexecuted_blocks=1 00:12:15.411 00:12:15.411 ' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.411 --rc genhtml_branch_coverage=1 00:12:15.411 --rc genhtml_function_coverage=1 00:12:15.411 --rc genhtml_legend=1 00:12:15.411 --rc geninfo_all_blocks=1 00:12:15.411 --rc geninfo_unexecuted_blocks=1 00:12:15.411 00:12:15.411 ' 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.411 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.671 11:51:23 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.671 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.672 11:51:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:22.240 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:22.240 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:22.240 Found net devices under 0000:da:00.0: mlx_0_0 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:22.240 Found net devices under 0000:da:00.1: mlx_0_1 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:22.240 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:22.241 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.241 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:22.241 altname enp218s0f0np0 00:12:22.241 altname ens818f0np0 00:12:22.241 inet 192.168.100.8/24 scope global mlx_0_0 00:12:22.241 valid_lft forever preferred_lft forever 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:22.241 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.241 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:22.241 altname enp218s0f1np1 00:12:22.241 altname ens818f1np1 00:12:22.241 inet 192.168.100.9/24 scope global mlx_0_1 00:12:22.241 valid_lft forever preferred_lft forever 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:22.241 11:51:29 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:22.241 192.168.100.9' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:22.241 192.168.100.9' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:22.241 192.168.100.9' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:12:22.241 11:51:29 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3191226 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3191226 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3191226 ']' 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.241 [2024-12-09 11:51:29.320693] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:12:22.241 [2024-12-09 11:51:29.320753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.241 [2024-12-09 11:51:29.400185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.241 [2024-12-09 11:51:29.442718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.241 [2024-12-09 11:51:29.442759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:22.241 [2024-12-09 11:51:29.442768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.241 [2024-12-09 11:51:29.442774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.241 [2024-12-09 11:51:29.442779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.241 [2024-12-09 11:51:29.444235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.241 [2024-12-09 11:51:29.444344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.241 [2024-12-09 11:51:29.444450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.241 [2024-12-09 11:51:29.444451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.241 [2024-12-09 11:51:29.613402] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x74a940/0x74ee30) succeed. 00:12:22.241 [2024-12-09 11:51:29.624772] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x74bfd0/0x7904d0) succeed. 
00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.241 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 Malloc0 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 Malloc1 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 [2024-12-09 11:51:29.838800] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:22.242 11:51:29 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:22.242 00:12:22.242 Discovery Log Number of Records 2, Generation counter 2 00:12:22.242 =====Discovery Log Entry 0====== 00:12:22.242 trtype: rdma 00:12:22.242 adrfam: ipv4 00:12:22.242 subtype: current discovery subsystem 00:12:22.242 treq: not required 00:12:22.242 portid: 0 00:12:22.242 trsvcid: 4420 00:12:22.242 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.242 traddr: 192.168.100.8 00:12:22.242 eflags: explicit discovery connections, duplicate discovery information 00:12:22.242 rdma_prtype: not specified 00:12:22.242 rdma_qptype: connected 00:12:22.242 rdma_cms: rdma-cm 00:12:22.242 rdma_pkey: 0x0000 00:12:22.242 =====Discovery Log Entry 1====== 00:12:22.242 trtype: rdma 00:12:22.242 adrfam: ipv4 00:12:22.242 subtype: nvme subsystem 00:12:22.242 treq: not required 00:12:22.242 portid: 0 00:12:22.242 trsvcid: 4420 00:12:22.242 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.242 traddr: 192.168.100.8 00:12:22.242 eflags: none 00:12:22.242 rdma_prtype: not specified 00:12:22.242 rdma_qptype: connected 00:12:22.242 rdma_cms: rdma-cm 00:12:22.242 rdma_pkey: 0x0000 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:22.242 11:51:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:23.178 11:51:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:25.082 /dev/nvme0n2 ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.082 11:51:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:26.018 11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.018 
11:51:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:26.018 rmmod nvme_rdma 00:12:26.018 rmmod nvme_fabrics 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3191226 ']' 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3191226 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3191226 ']' 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3191226 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.018 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3191226 00:12:26.276 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.276 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.276 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3191226' 00:12:26.276 killing process with pid 3191226 00:12:26.276 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3191226 00:12:26.276 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3191226 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:26.536 00:12:26.536 real 0m11.094s 00:12:26.536 user 0m21.280s 00:12:26.536 sys 0m4.899s 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.536 ************************************ 00:12:26.536 END TEST nvmf_nvme_cli 00:12:26.536 ************************************ 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.536 ************************************ 00:12:26.536 START TEST nvmf_auth_target 00:12:26.536 ************************************ 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:26.536 * Looking for test storage... 00:12:26.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.536 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.796 --rc genhtml_branch_coverage=1 00:12:26.796 --rc genhtml_function_coverage=1 00:12:26.796 --rc genhtml_legend=1 00:12:26.796 --rc geninfo_all_blocks=1 00:12:26.796 --rc geninfo_unexecuted_blocks=1 00:12:26.796 00:12:26.796 ' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.796 --rc genhtml_branch_coverage=1 00:12:26.796 --rc genhtml_function_coverage=1 00:12:26.796 --rc genhtml_legend=1 00:12:26.796 --rc geninfo_all_blocks=1 00:12:26.796 --rc geninfo_unexecuted_blocks=1 00:12:26.796 00:12:26.796 ' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.796 --rc genhtml_branch_coverage=1 00:12:26.796 --rc genhtml_function_coverage=1 00:12:26.796 --rc genhtml_legend=1 00:12:26.796 --rc geninfo_all_blocks=1 00:12:26.796 --rc geninfo_unexecuted_blocks=1 00:12:26.796 00:12:26.796 ' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.796 --rc genhtml_branch_coverage=1 00:12:26.796 --rc genhtml_function_coverage=1 00:12:26.796 --rc genhtml_legend=1 00:12:26.796 --rc geninfo_all_blocks=1 00:12:26.796 --rc geninfo_unexecuted_blocks=1 00:12:26.796 00:12:26.796 ' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.796 11:51:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.796 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.797 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
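One line of real stderr is interleaved in the trace above: nvmf/common.sh line 33 feeds an empty string into a numeric [ -eq ] test, and bash's [ builtin rejects it. The message is easy to reproduce and to guard against; the variable name below is a stand-in, since the trace does not show which expansion was empty:

  flag=""
  [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ]   # defaulting the expansion keeps the test quiet
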
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.797 11:51:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.369 11:51:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:33.369 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.369 11:51:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:33.369 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:33.369 Found net devices under 0000:da:00.0: mlx_0_0 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:33.369 Found net devices under 0000:da:00.1: mlx_0_1 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.369 11:51:40 
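All of the NIC discovery above is plain sysfs: for each candidate PCI function the script expands /sys/bus/pci/devices/<bdf>/net/* and keeps the basenames, which are exactly the kernel netdevs backed by that function. The same lookup by hand, using an address taken from this run:

  pci=0000:da:00.0
  # each entry under .../net/ is a netdev owned by this PCI function
  ls "/sys/bus/pci/devices/$pci/net/"    # -> mlx_0_0 on this rig
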
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:33.369 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:33.370 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.370 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:33.370 altname enp218s0f0np0 00:12:33.370 altname ens818f0np0 00:12:33.370 inet 192.168.100.8/24 scope global mlx_0_0 00:12:33.370 valid_lft forever preferred_lft forever 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:33.370 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.370 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:33.370 altname enp218s0f1np1 00:12:33.370 altname ens818f1np1 00:12:33.370 inet 192.168.100.9/24 scope global mlx_0_1 00:12:33.370 valid_lft forever preferred_lft forever 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:33.370 11:51:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:33.370 192.168.100.9' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:33.370 192.168.100.9' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:33.370 192.168.100.9' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3195255 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3195255 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3195255 ']' 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
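With both addresses collected into the newline-separated RDMA_IP_LIST, the first and second target IPs fall out of head -n 1 and tail -n +2, and nvmfappstart launches nvmf_tgt with -L nvmf_auth so the target logs its side of the DH-HMAC-CHAP exchange. A rough stand-in for the start-and-wait step, polling the RPC socket directly instead of using SPDK's waitforlisten helper:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # crude waitforlisten: poll until the RPC server answers on its socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
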
00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3195408 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:33.370 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e084cbbb353d356d5074169daf2b95c4e86df096a65f654 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.p6B 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e084cbbb353d356d5074169daf2b95c4e86df096a65f654 0 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e084cbbb353d356d5074169daf2b95c4e86df096a65f654 0 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e084cbbb353d356d5074169daf2b95c4e86df096a65f654 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.p6B 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.p6B 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.p6B 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=946fd9f358971dffeefbbbe69bd8252602429205dfdf3acd95f642cfefee62bf 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XBW 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 946fd9f358971dffeefbbbe69bd8252602429205dfdf3acd95f642cfefee62bf 3 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 946fd9f358971dffeefbbbe69bd8252602429205dfdf3acd95f642cfefee62bf 3 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=946fd9f358971dffeefbbbe69bd8252602429205dfdf3acd95f642cfefee62bf 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XBW 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XBW 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.XBW 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.371 11:51:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb4a4ada783debed79697677b6558191 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AXM 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb4a4ada783debed79697677b6558191 1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb4a4ada783debed79697677b6558191 1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb4a4ada783debed79697677b6558191 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:33.371 11:51:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AXM 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AXM 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.AXM 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c86f49496e795e0b89a820a1a6d879e95c09d75c2939f04d 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Fol 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c86f49496e795e0b89a820a1a6d879e95c09d75c2939f04d 2 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c86f49496e795e0b89a820a1a6d879e95c09d75c2939f04d 2 00:12:33.371 11:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c86f49496e795e0b89a820a1a6d879e95c09d75c2939f04d 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Fol 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Fol 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Fol 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dc190f42ecff5becbc3c53eac33960f99bf76342466ae409 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TcP 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dc190f42ecff5becbc3c53eac33960f99bf76342466ae409 2 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dc190f42ecff5becbc3c53eac33960f99bf76342466ae409 2 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dc190f42ecff5becbc3c53eac33960f99bf76342466ae409 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TcP 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TcP 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.TcP 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e3dd356dd20db632c710d619b6b70fc 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iJa 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e3dd356dd20db632c710d619b6b70fc 1 00:12:33.371 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e3dd356dd20db632c710d619b6b70fc 1 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e3dd356dd20db632c710d619b6b70fc 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iJa 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iJa 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.iJa 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4f44a117acf3c22626434b83ca0658f70b09d0c78271b303dc4ce66f5e5cff5 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:33.372 11:51:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Crx 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4f44a117acf3c22626434b83ca0658f70b09d0c78271b303dc4ce66f5e5cff5 3 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4f44a117acf3c22626434b83ca0658f70b09d0c78271b303dc4ce66f5e5cff5 3 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4f44a117acf3c22626434b83ca0658f70b09d0c78271b303dc4ce66f5e5cff5 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Crx 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Crx 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Crx 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3195255 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3195255 ']' 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.372 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3195408 /var/tmp/host.sock 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3195408 ']' 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
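Every secret above comes out of the same gen_dhchap_key recipe: the length argument counts hex digits, so xxd reads half that many bytes from /dev/urandom, and the bare "python -" steps in the trace are an inline helper that wraps the hex into the DHHC-1:<digest-id>:<base64>: wire format, where ids 0/1/2/3 select null/sha256/sha384/sha512. The file handling, sketched with the encoding step left as the comment it is:

  len=48                                   # hex digits requested
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  # format_dhchap_key writes "DHHC-1:00:<base64 of key+CRC>:" into $file
  chmod 0600 "$file"                       # keep the secret private
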
00:12:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.631 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p6B 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.p6B 00:12:33.889 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.p6B 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.XBW ]] 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBW 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBW 00:12:34.148 11:51:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBW 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AXM 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.148 11:51:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AXM 00:12:34.148 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AXM 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Fol ]] 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fol 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fol 00:12:34.407 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fol 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TcP 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TcP 00:12:34.665 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TcP 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.iJa ]] 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iJa 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iJa 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iJa 00:12:34.924 11:51:42 
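The add-key loop registers each secret twice under the same name: once over the default /var/tmp/spdk.sock (rpc_cmd, the target side) and once over /var/tmp/host.sock (hostrpc, the initiator-side spdk_tgt), so later RPCs can refer to key0/ckey0 symbolically on either end. One pair from this run, spelled out with rpc.py standing for the full scripts/rpc.py path used in the log:

  rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.p6B
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.p6B
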
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Crx 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Crx 00:12:34.924 11:51:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Crx 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:35.183 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.442 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.700 00:12:35.700 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.700 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.700 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.959 { 00:12:35.959 "cntlid": 1, 00:12:35.959 "qid": 0, 00:12:35.959 "state": "enabled", 00:12:35.959 "thread": "nvmf_tgt_poll_group_000", 00:12:35.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:35.959 "listen_address": { 00:12:35.959 "trtype": "RDMA", 00:12:35.959 "adrfam": "IPv4", 00:12:35.959 "traddr": "192.168.100.8", 00:12:35.959 "trsvcid": "4420" 00:12:35.959 }, 00:12:35.959 "peer_address": { 00:12:35.959 "trtype": "RDMA", 00:12:35.959 "adrfam": "IPv4", 00:12:35.959 "traddr": "192.168.100.8", 00:12:35.959 "trsvcid": "42655" 00:12:35.959 }, 00:12:35.959 "auth": { 00:12:35.959 "state": "completed", 00:12:35.959 "digest": "sha256", 00:12:35.959 "dhgroup": "null" 00:12:35.959 } 00:12:35.959 } 00:12:35.959 ]' 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.959 11:51:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:12:36.218 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:36.218 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:36.785 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:37.044 11:51:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.044 11:51:45 
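Between attempts the host is re-pinned to a single digest and DH group (the bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null calls above), so every (digest, dhgroup, key) combination is negotiated in isolation rather than letting the host offer its full list. The loop structure, reconstructed from the auth.sh@118-121 frames in this log; only the sha256 digest is visible in this excerpt, so the digests array is left at that:

# Outer matrix as the log replays it (sketch; connect_authenticate is
# the auth.sh function invoked at auth.sh@123).
digests=(sha256)
dhgroups=(null ffdhe2048 ffdhe3072)   # the groups this excerpt reaches

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3; do
            # pin the host to exactly one digest and one DH group per attempt
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done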
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.044 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.303 00:12:37.303 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.303 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.303 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.562 { 00:12:37.562 "cntlid": 3, 00:12:37.562 "qid": 0, 00:12:37.562 "state": "enabled", 00:12:37.562 "thread": "nvmf_tgt_poll_group_000", 00:12:37.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:37.562 "listen_address": { 00:12:37.562 "trtype": "RDMA", 00:12:37.562 "adrfam": "IPv4", 00:12:37.562 "traddr": "192.168.100.8", 00:12:37.562 "trsvcid": "4420" 00:12:37.562 }, 00:12:37.562 "peer_address": { 00:12:37.562 "trtype": "RDMA", 00:12:37.562 "adrfam": "IPv4", 00:12:37.562 "traddr": "192.168.100.8", 00:12:37.562 "trsvcid": "34987" 00:12:37.562 }, 00:12:37.562 "auth": { 00:12:37.562 "state": "completed", 00:12:37.562 "digest": "sha256", 00:12:37.562 "dhgroup": "null" 00:12:37.562 } 00:12:37.562 } 00:12:37.562 ]' 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:37.562 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.820 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.820 11:51:45 
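After each attach the target is asked for the subsystem's qpairs and the auth object is checked with jq; a matching digest and dhgroup plus a state of "completed" is what distinguishes a finished DH-HMAC-CHAP handshake from a connection that skipped authentication. The three checks above (auth.sh@75-77) condensed into one helper; verify_qpair_auth is a name introduced here (auth.sh does this inline), and the JSON shape is the one shown in the log:

verify_qpair_auth() {
    local subsys=$1 digest=$2 dhgroup=$3 qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subsys")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]
}

# the attempt just logged, for example:
verify_qpair_auth nqn.2024-03.io.spdk:cnode0 sha256 null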
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.820 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.820 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:37.820 11:51:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.755 11:51:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.755 11:51:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.074 00:12:39.074 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.074 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.074 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.338 { 00:12:39.338 "cntlid": 5, 00:12:39.338 "qid": 0, 00:12:39.338 "state": "enabled", 00:12:39.338 "thread": "nvmf_tgt_poll_group_000", 00:12:39.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:39.338 "listen_address": { 00:12:39.338 "trtype": "RDMA", 00:12:39.338 "adrfam": "IPv4", 00:12:39.338 "traddr": "192.168.100.8", 00:12:39.338 "trsvcid": "4420" 00:12:39.338 }, 00:12:39.338 "peer_address": { 00:12:39.338 "trtype": "RDMA", 00:12:39.338 "adrfam": "IPv4", 00:12:39.338 "traddr": "192.168.100.8", 00:12:39.338 "trsvcid": "51480" 00:12:39.338 }, 00:12:39.338 "auth": { 00:12:39.338 "state": "completed", 00:12:39.338 "digest": "sha256", 00:12:39.338 "dhgroup": "null" 00:12:39.338 } 00:12:39.338 } 00:12:39.338 ]' 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.338 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:39.338 11:51:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.625 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.625 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.625 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.625 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:39.625 11:51:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:40.242 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.519 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:40.786 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.786 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.786 00:12:40.786 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.786 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.786 11:51:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.052 { 00:12:41.052 "cntlid": 7, 00:12:41.052 "qid": 0, 00:12:41.052 "state": "enabled", 00:12:41.052 "thread": "nvmf_tgt_poll_group_000", 00:12:41.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:41.052 "listen_address": { 00:12:41.052 "trtype": "RDMA", 00:12:41.052 "adrfam": "IPv4", 00:12:41.052 "traddr": "192.168.100.8", 00:12:41.052 "trsvcid": "4420" 00:12:41.052 }, 00:12:41.052 "peer_address": { 00:12:41.052 "trtype": "RDMA", 00:12:41.052 "adrfam": "IPv4", 00:12:41.052 "traddr": "192.168.100.8", 00:12:41.052 "trsvcid": "50179" 00:12:41.052 }, 00:12:41.052 "auth": { 00:12:41.052 "state": "completed", 00:12:41.052 "digest": "sha256", 00:12:41.052 "dhgroup": "null" 00:12:41.052 } 00:12:41.052 } 00:12:41.052 ]' 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.052 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
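key3 is the one entry with an empty controller key: the [[ -n '' ]] at auth.sh@111 earlier came up false, so no ckey3 exists, and the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls above carry --dhchap-key key3 alone. That silent fallback to unidirectional (host-only) authentication comes from the conditional array expansion logged at auth.sh@68:

# ${ckeys[$3]:+...} expands to nothing when the controller key is unset
# or empty, so "${ckey[@]}" vanishes from the command line and the same
# code path covers one-way and mutual auth ($3 is the key index argument).
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"

($subnqn and $hostnqn stand for the cnode0 and uuid-based NQNs used throughout this log.)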
00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.319 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.594 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:41.594 11:51:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.175 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.442 11:51:50 
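Each pass also replays the handshake through the kernel initiator, passing the key material in its in-band DHHC-1 form. In DHHC-1:NN:<base64>:, the two-digit field records the HMAC that transformed the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which lines up with the key files above and is why the key3 connect just logged carries a DHHC-1:03: secret and, being unidirectional, no --dhchap-ctrl-secret. The shape of that call, with the secrets held in placeholder variables rather than spelled out:

# Kernel-initiator leg of one pass (sketch; flags mirror the log's
# invocation, $host_key/$ctrl_key hold the DHHC-1:... strings).
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
    -i 1 -l 0 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$host_key" \
    ${ctrl_key:+--dhchap-ctrl-secret "$ctrl_key"}   # dropped for key3
nvme disconnect -n nqn.2024-03.io.spdk:cnode0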
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.715 00:12:42.715 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.715 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.715 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.990 { 00:12:42.990 "cntlid": 9, 00:12:42.990 "qid": 0, 00:12:42.990 "state": "enabled", 00:12:42.990 "thread": "nvmf_tgt_poll_group_000", 00:12:42.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:42.990 "listen_address": { 00:12:42.990 "trtype": "RDMA", 00:12:42.990 "adrfam": "IPv4", 00:12:42.990 "traddr": "192.168.100.8", 00:12:42.990 "trsvcid": "4420" 00:12:42.990 }, 00:12:42.990 "peer_address": { 00:12:42.990 "trtype": "RDMA", 00:12:42.990 "adrfam": "IPv4", 00:12:42.990 "traddr": "192.168.100.8", 00:12:42.990 "trsvcid": "50467" 00:12:42.990 }, 00:12:42.990 "auth": { 00:12:42.990 "state": "completed", 00:12:42.990 "digest": "sha256", 00:12:42.990 "dhgroup": "ffdhe2048" 00:12:42.990 } 00:12:42.990 } 00:12:42.990 ]' 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
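Taken together, every iteration in this log is the same cycle: authorize the host on the subsystem with one key pair, attach through the SPDK host app, verify the qpair's auth object, detach, repeat the handshake with the kernel initiator, then revoke the host. A sketch of one pass built from the helpers above; connect_cycle and nvme_leg are names introduced here, condensing what auth.sh@123's connect_authenticate does inline:

connect_cycle() {
    local digest=$1 dhgroup=$2 id=$3 hostnqn=$4
    local ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$id" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$id" "${ckey[@]}"
    verify_qpair_auth nqn.2024-03.io.spdk:cnode0 "$digest" "$dhgroup"
    hostrpc bdev_nvme_detach_controller nvme0
    nvme_leg "$id"   # the nvme connect/disconnect pair sketched earlier
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
}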
00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.990 11:51:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.259 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:43.259 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:43.850 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.850 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:43.850 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.850 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.129 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.129 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.129 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:44.129 11:51:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.129 11:51:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.129 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.413 00:12:44.413 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.413 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.413 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.697 { 00:12:44.697 "cntlid": 11, 00:12:44.697 "qid": 0, 00:12:44.697 "state": "enabled", 00:12:44.697 "thread": "nvmf_tgt_poll_group_000", 00:12:44.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:44.697 "listen_address": { 00:12:44.697 "trtype": "RDMA", 00:12:44.697 "adrfam": "IPv4", 00:12:44.697 "traddr": "192.168.100.8", 00:12:44.697 "trsvcid": "4420" 00:12:44.697 }, 00:12:44.697 "peer_address": { 00:12:44.697 "trtype": "RDMA", 00:12:44.697 "adrfam": "IPv4", 00:12:44.697 "traddr": 
"192.168.100.8", 00:12:44.697 "trsvcid": "37685" 00:12:44.697 }, 00:12:44.697 "auth": { 00:12:44.697 "state": "completed", 00:12:44.697 "digest": "sha256", 00:12:44.697 "dhgroup": "ffdhe2048" 00:12:44.697 } 00:12:44.697 } 00:12:44.697 ]' 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.697 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.978 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:44.978 11:51:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:45.580 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:12:45.858 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.859 11:51:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.126 00:12:46.127 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.127 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.127 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.391 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.391 { 00:12:46.391 "cntlid": 13, 00:12:46.391 "qid": 0, 00:12:46.391 "state": "enabled", 00:12:46.391 "thread": "nvmf_tgt_poll_group_000", 00:12:46.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:46.391 "listen_address": { 00:12:46.391 
"trtype": "RDMA", 00:12:46.392 "adrfam": "IPv4", 00:12:46.392 "traddr": "192.168.100.8", 00:12:46.392 "trsvcid": "4420" 00:12:46.392 }, 00:12:46.392 "peer_address": { 00:12:46.392 "trtype": "RDMA", 00:12:46.392 "adrfam": "IPv4", 00:12:46.392 "traddr": "192.168.100.8", 00:12:46.392 "trsvcid": "44711" 00:12:46.392 }, 00:12:46.392 "auth": { 00:12:46.392 "state": "completed", 00:12:46.392 "digest": "sha256", 00:12:46.392 "dhgroup": "ffdhe2048" 00:12:46.392 } 00:12:46.392 } 00:12:46.392 ]' 00:12:46.392 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.392 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.392 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.392 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.392 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.658 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.658 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.658 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.658 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:46.658 11:51:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:47.273 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:47.542 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.817 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.091 00:12:48.091 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.091 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.091 11:51:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.091 { 00:12:48.091 "cntlid": 15, 00:12:48.091 "qid": 0, 00:12:48.091 "state": "enabled", 
00:12:48.091 "thread": "nvmf_tgt_poll_group_000", 00:12:48.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:48.091 "listen_address": { 00:12:48.091 "trtype": "RDMA", 00:12:48.091 "adrfam": "IPv4", 00:12:48.091 "traddr": "192.168.100.8", 00:12:48.091 "trsvcid": "4420" 00:12:48.091 }, 00:12:48.091 "peer_address": { 00:12:48.091 "trtype": "RDMA", 00:12:48.091 "adrfam": "IPv4", 00:12:48.091 "traddr": "192.168.100.8", 00:12:48.091 "trsvcid": "50145" 00:12:48.091 }, 00:12:48.091 "auth": { 00:12:48.091 "state": "completed", 00:12:48.091 "digest": "sha256", 00:12:48.091 "dhgroup": "ffdhe2048" 00:12:48.091 } 00:12:48.091 } 00:12:48.091 ]' 00:12:48.091 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.368 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.641 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:48.641 11:51:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:49.230 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.499 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.770 00:12:49.770 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.770 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.770 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.044 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.044 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.044 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.045 11:51:57 
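From here the run is on the third dhgroup column of the matrix: the same four keys and sha256 digest, now negotiating over ffdhe3072, the 3072-bit RFC 7919 finite-field group. In terms of the sketches above, the leg being replayed is simply (HOSTNQN standing for the uuid-based host NQN used throughout):

for id in 0 1 2 3; do
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    connect_cycle sha256 ffdhe3072 "$id" "$HOSTNQN"
done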
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.045 { 00:12:50.045 "cntlid": 17, 00:12:50.045 "qid": 0, 00:12:50.045 "state": "enabled", 00:12:50.045 "thread": "nvmf_tgt_poll_group_000", 00:12:50.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:50.045 "listen_address": { 00:12:50.045 "trtype": "RDMA", 00:12:50.045 "adrfam": "IPv4", 00:12:50.045 "traddr": "192.168.100.8", 00:12:50.045 "trsvcid": "4420" 00:12:50.045 }, 00:12:50.045 "peer_address": { 00:12:50.045 "trtype": "RDMA", 00:12:50.045 "adrfam": "IPv4", 00:12:50.045 "traddr": "192.168.100.8", 00:12:50.045 "trsvcid": "56434" 00:12:50.045 }, 00:12:50.045 "auth": { 00:12:50.045 "state": "completed", 00:12:50.045 "digest": "sha256", 00:12:50.045 "dhgroup": "ffdhe3072" 00:12:50.045 } 00:12:50.045 } 00:12:50.045 ]' 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.045 11:51:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.045 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.045 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.045 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.317 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:50.317 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.918 11:51:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.194 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.468 00:12:51.468 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.469 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.469 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.746 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.746 11:51:59 
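--- The target-side half of the same pass, sketched; the subsystem and host NQNs are the ones in the trace, key1/ckey1 name key objects assumed to have been loaded earlier in target/auth.sh, and the target RPC is assumed on its default socket (the trace's rpc_cmd passes no -s):
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
# register the host on the subsystem, pinning the DHCHAP key pair under test
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
---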
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.746 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.746 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.746 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.746 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.746 { 00:12:51.746 "cntlid": 19, 00:12:51.746 "qid": 0, 00:12:51.746 "state": "enabled", 00:12:51.746 "thread": "nvmf_tgt_poll_group_000", 00:12:51.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:51.746 "listen_address": { 00:12:51.746 "trtype": "RDMA", 00:12:51.746 "adrfam": "IPv4", 00:12:51.747 "traddr": "192.168.100.8", 00:12:51.747 "trsvcid": "4420" 00:12:51.747 }, 00:12:51.747 "peer_address": { 00:12:51.747 "trtype": "RDMA", 00:12:51.747 "adrfam": "IPv4", 00:12:51.747 "traddr": "192.168.100.8", 00:12:51.747 "trsvcid": "53680" 00:12:51.747 }, 00:12:51.747 "auth": { 00:12:51.747 "state": "completed", 00:12:51.747 "digest": "sha256", 00:12:51.747 "dhgroup": "ffdhe3072" 00:12:51.747 } 00:12:51.747 } 00:12:51.747 ]' 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.747 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.020 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:52.021 11:51:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:52.628 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.903 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.179 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.179 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.179 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.179 11:52:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.179 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.457 { 00:12:53.457 "cntlid": 21, 00:12:53.457 "qid": 0, 00:12:53.457 "state": "enabled", 00:12:53.457 "thread": "nvmf_tgt_poll_group_000", 00:12:53.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:53.457 "listen_address": { 00:12:53.457 "trtype": "RDMA", 00:12:53.457 "adrfam": "IPv4", 00:12:53.457 "traddr": "192.168.100.8", 00:12:53.457 "trsvcid": "4420" 00:12:53.457 }, 00:12:53.457 "peer_address": { 00:12:53.457 "trtype": "RDMA", 00:12:53.457 "adrfam": "IPv4", 00:12:53.457 "traddr": "192.168.100.8", 00:12:53.457 "trsvcid": "42451" 00:12:53.457 }, 00:12:53.457 "auth": { 00:12:53.457 "state": "completed", 00:12:53.457 "digest": "sha256", 00:12:53.457 "dhgroup": "ffdhe3072" 00:12:53.457 } 00:12:53.457 } 00:12:53.457 ]' 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.457 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.742 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.742 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.742 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.742 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.742 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.016 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:54.016 11:52:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:54.597 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.865 11:52:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.155 00:12:55.155 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.155 11:52:03 
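--- The key3 pass above is the unidirectional case: no controller key was generated for key3, so the script's ${ckeys[$3]:+...} expansion comes up empty and the host attaches with --dhchap-key only. A minimal sketch of that attach, with names taken from the trace:
"$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# no --dhchap-ctrlr-key here: only the host authenticates to the controller
---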
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.155 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.432 { 00:12:55.432 "cntlid": 23, 00:12:55.432 "qid": 0, 00:12:55.432 "state": "enabled", 00:12:55.432 "thread": "nvmf_tgt_poll_group_000", 00:12:55.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:55.432 "listen_address": { 00:12:55.432 "trtype": "RDMA", 00:12:55.432 "adrfam": "IPv4", 00:12:55.432 "traddr": "192.168.100.8", 00:12:55.432 "trsvcid": "4420" 00:12:55.432 }, 00:12:55.432 "peer_address": { 00:12:55.432 "trtype": "RDMA", 00:12:55.432 "adrfam": "IPv4", 00:12:55.432 "traddr": "192.168.100.8", 00:12:55.432 "trsvcid": "37239" 00:12:55.432 }, 00:12:55.432 "auth": { 00:12:55.432 "state": "completed", 00:12:55.432 "digest": "sha256", 00:12:55.432 "dhgroup": "ffdhe3072" 00:12:55.432 } 00:12:55.432 } 00:12:55.432 ]' 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.432 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.698 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:55.698 11:52:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:12:56.310 11:52:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.310 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.595 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.596 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.596 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.596 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.596 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.596 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.886 00:12:56.886 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.886 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.886 11:52:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.164 { 00:12:57.164 "cntlid": 25, 00:12:57.164 "qid": 0, 00:12:57.164 "state": "enabled", 00:12:57.164 "thread": "nvmf_tgt_poll_group_000", 00:12:57.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:57.164 "listen_address": { 00:12:57.164 "trtype": "RDMA", 00:12:57.165 "adrfam": "IPv4", 00:12:57.165 "traddr": "192.168.100.8", 00:12:57.165 "trsvcid": "4420" 00:12:57.165 }, 00:12:57.165 "peer_address": { 00:12:57.165 "trtype": "RDMA", 00:12:57.165 "adrfam": "IPv4", 00:12:57.165 "traddr": "192.168.100.8", 00:12:57.165 "trsvcid": "55648" 00:12:57.165 }, 00:12:57.165 "auth": { 00:12:57.165 "state": "completed", 00:12:57.165 "digest": "sha256", 00:12:57.165 "dhgroup": "ffdhe4096" 00:12:57.165 } 00:12:57.165 } 00:12:57.165 ]' 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.165 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.448 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:57.449 11:52:05 
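--- How each pass is judged: dump the subsystem's active qpairs on the target and assert the negotiated auth descriptor, exactly what the jq probes in the trace do. A condensed sketch for the ffdhe4096 pass (target RPC assumed on its default socket):
qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# the qpair's auth block must show the digest/dhgroup this pass configured, fully negotiated
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
---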
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:12:58.086 11:52:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:58.086 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.350 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.351 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.351 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.610 00:12:58.610 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.610 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.610 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.869 { 00:12:58.869 "cntlid": 27, 00:12:58.869 "qid": 0, 00:12:58.869 "state": "enabled", 00:12:58.869 "thread": "nvmf_tgt_poll_group_000", 00:12:58.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:12:58.869 "listen_address": { 00:12:58.869 "trtype": "RDMA", 00:12:58.869 "adrfam": "IPv4", 00:12:58.869 "traddr": "192.168.100.8", 00:12:58.869 "trsvcid": "4420" 00:12:58.869 }, 00:12:58.869 "peer_address": { 00:12:58.869 "trtype": "RDMA", 00:12:58.869 "adrfam": "IPv4", 00:12:58.869 "traddr": "192.168.100.8", 00:12:58.869 "trsvcid": "58856" 00:12:58.869 }, 00:12:58.869 "auth": { 00:12:58.869 "state": "completed", 00:12:58.869 "digest": "sha256", 00:12:58.869 "dhgroup": "ffdhe4096" 00:12:58.869 } 00:12:58.869 } 00:12:58.869 ]' 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.869 11:52:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.128 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:59.129 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:12:59.696 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.955 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.956 11:52:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.215 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.473 00:13:00.473 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.473 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.473 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.732 { 00:13:00.732 "cntlid": 29, 00:13:00.732 "qid": 0, 00:13:00.732 "state": "enabled", 00:13:00.732 "thread": "nvmf_tgt_poll_group_000", 00:13:00.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:00.732 "listen_address": { 00:13:00.732 "trtype": "RDMA", 00:13:00.732 "adrfam": "IPv4", 00:13:00.732 "traddr": "192.168.100.8", 00:13:00.732 "trsvcid": "4420" 00:13:00.732 }, 00:13:00.732 "peer_address": { 00:13:00.732 "trtype": "RDMA", 00:13:00.732 "adrfam": "IPv4", 00:13:00.732 "traddr": "192.168.100.8", 00:13:00.732 "trsvcid": "50978" 00:13:00.732 }, 00:13:00.732 "auth": { 00:13:00.732 "state": "completed", 00:13:00.732 "digest": "sha256", 00:13:00.732 "dhgroup": "ffdhe4096" 00:13:00.732 } 00:13:00.732 } 00:13:00.732 ]' 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.732 11:52:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.732 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.991 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:00.991 11:52:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:01.559 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.819 11:52:09 
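--- Each pass is also cross-checked from the kernel initiator: nvme-cli connects with the same DHCHAP material passed as DHHC-1 secret blobs, then disconnects. Sketch with placeholder blobs; the real ones are the generated keys printed in the trace:
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$HOSTNQN" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
  --dhchap-secret 'DHHC-1:02:<host key blob>:' \
  --dhchap-ctrl-secret 'DHHC-1:01:<controller key blob>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
---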
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.819 11:52:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.386 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.386 { 00:13:02.386 "cntlid": 31, 00:13:02.386 "qid": 0, 00:13:02.386 "state": "enabled", 00:13:02.386 "thread": "nvmf_tgt_poll_group_000", 00:13:02.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:02.386 "listen_address": { 00:13:02.386 "trtype": "RDMA", 00:13:02.386 "adrfam": "IPv4", 00:13:02.386 "traddr": "192.168.100.8", 00:13:02.386 "trsvcid": "4420" 00:13:02.386 }, 00:13:02.386 "peer_address": { 00:13:02.386 "trtype": "RDMA", 00:13:02.386 "adrfam": "IPv4", 00:13:02.386 "traddr": "192.168.100.8", 00:13:02.386 "trsvcid": "41277" 00:13:02.386 }, 00:13:02.386 "auth": { 00:13:02.386 "state": "completed", 00:13:02.386 "digest": "sha256", 00:13:02.386 "dhgroup": "ffdhe4096" 00:13:02.386 } 00:13:02.386 } 00:13:02.386 ]' 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.386 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.645 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.645 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state'
00:13:02.645 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:02.645 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:02.645 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:02.903 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:02.903 11:52:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:03.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:03.471 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:03.743 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:04.004
00:13:04.004 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:04.004 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:04.004 11:52:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.262 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:04.263 {
00:13:04.263 "cntlid": 33,
00:13:04.263 "qid": 0,
00:13:04.263 "state": "enabled",
00:13:04.263 "thread": "nvmf_tgt_poll_group_000",
00:13:04.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:04.263 "listen_address": {
00:13:04.263 "trtype": "RDMA",
00:13:04.263 "adrfam": "IPv4",
00:13:04.263 "traddr": "192.168.100.8",
00:13:04.263 "trsvcid": "4420"
00:13:04.263 },
00:13:04.263 "peer_address": {
00:13:04.263 "trtype": "RDMA",
00:13:04.263 "adrfam": "IPv4",
00:13:04.263 "traddr": "192.168.100.8",
00:13:04.263 "trsvcid": "52983"
00:13:04.263 },
00:13:04.263 "auth": {
00:13:04.263 "state": "completed",
00:13:04.263 "digest": "sha256",
00:13:04.263 "dhgroup": "ffdhe6144"
00:13:04.263 }
00:13:04.263 }
00:13:04.263 ]'
00:13:04.263 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:04.263 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:04.263 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:04.263 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:04.263 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:04.521 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:04.521 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:04.521 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:04.521 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:04.521 11:52:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:05.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:05.457 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.458 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:06.025
00:13:06.025 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:06.025 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:06.025 11:52:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:06.025 {
00:13:06.025 "cntlid": 35,
00:13:06.025 "qid": 0,
00:13:06.025 "state": "enabled",
00:13:06.025 "thread": "nvmf_tgt_poll_group_000",
00:13:06.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:06.025 "listen_address": {
00:13:06.025 "trtype": "RDMA",
00:13:06.025 "adrfam": "IPv4",
00:13:06.025 "traddr": "192.168.100.8",
00:13:06.025 "trsvcid": "4420"
00:13:06.025 },
00:13:06.025 "peer_address": {
00:13:06.025 "trtype": "RDMA",
00:13:06.025 "adrfam": "IPv4",
00:13:06.025 "traddr": "192.168.100.8",
00:13:06.025 "trsvcid": "53828"
00:13:06.025 },
00:13:06.025 "auth": {
00:13:06.025 "state": "completed",
00:13:06.025 "digest": "sha256",
00:13:06.025 "dhgroup": "ffdhe6144"
00:13:06.025 }
00:13:06.025 }
00:13:06.025 ]'
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:06.025 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:06.284 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:06.284 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:06.284 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:06.284 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:06.284 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:06.542 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==:
00:13:06.542 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==:
00:13:07.112 11:52:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:07.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:07.112 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.371 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.629
00:13:07.629 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:07.629 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:07.629 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.887 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:07.887 {
00:13:07.887 "cntlid": 37,
00:13:07.887 "qid": 0,
00:13:07.887 "state": "enabled",
00:13:07.887 "thread": "nvmf_tgt_poll_group_000",
00:13:07.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:07.887 "listen_address": {
00:13:07.887 "trtype": "RDMA",
00:13:07.887 "adrfam": "IPv4",
00:13:07.887 "traddr": "192.168.100.8",
00:13:07.888 "trsvcid": "4420"
00:13:07.888 },
00:13:07.888 "peer_address": {
00:13:07.888 "trtype": "RDMA",
00:13:07.888 "adrfam": "IPv4",
00:13:07.888 "traddr": "192.168.100.8",
00:13:07.888 "trsvcid": "50366"
00:13:07.888 },
00:13:07.888 "auth": {
00:13:07.888 "state": "completed",
00:13:07.888 "digest": "sha256",
00:13:07.888 "dhgroup": "ffdhe6144"
00:13:07.888 }
00:13:07.888 }
00:13:07.888 ]'
00:13:07.888 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:07.888 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:07.888 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:08.146 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:08.146 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:08.146 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:08.146 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:08.146 11:52:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:08.404 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK:
00:13:08.404 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK:
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:08.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:08.970 11:52:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:09.228 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:09.486
00:13:09.486 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:09.486 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:09.486 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:09.744 {
00:13:09.744 "cntlid": 39,
00:13:09.744 "qid": 0,
00:13:09.744 "state": "enabled",
00:13:09.744 "thread": "nvmf_tgt_poll_group_000",
00:13:09.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:09.744 "listen_address": {
00:13:09.744 "trtype": "RDMA",
00:13:09.744 "adrfam": "IPv4",
00:13:09.744 "traddr": "192.168.100.8",
00:13:09.744 "trsvcid": "4420"
00:13:09.744 },
00:13:09.744 "peer_address": {
00:13:09.744 "trtype": "RDMA",
00:13:09.744 "adrfam": "IPv4",
00:13:09.744 "traddr": "192.168.100.8",
00:13:09.744 "trsvcid": "41244"
00:13:09.744 },
00:13:09.744 "auth": {
00:13:09.744 "state": "completed",
00:13:09.744 "digest": "sha256",
00:13:09.744 "dhgroup": "ffdhe6144"
00:13:09.744 }
00:13:09.744 }
00:13:09.744 ]'
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:09.744 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:10.003 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:10.003 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:10.003 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:10.003 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:10.003 11:52:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:10.003 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:10.003 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:10.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:10.938 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.939 11:52:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:11.507
00:13:11.507 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:11.507 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:11.507 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:11.765 {
00:13:11.765 "cntlid": 41,
00:13:11.765 "qid": 0,
00:13:11.765 "state": "enabled",
00:13:11.765 "thread": "nvmf_tgt_poll_group_000",
00:13:11.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:11.765 "listen_address": {
00:13:11.765 "trtype": "RDMA",
00:13:11.765 "adrfam": "IPv4",
00:13:11.765 "traddr": "192.168.100.8",
00:13:11.765 "trsvcid": "4420"
00:13:11.765 },
00:13:11.765 "peer_address": {
00:13:11.765 "trtype": "RDMA",
00:13:11.765 "adrfam": "IPv4",
00:13:11.765 "traddr": "192.168.100.8",
00:13:11.765 "trsvcid": "33943"
00:13:11.765 },
00:13:11.765 "auth": {
00:13:11.765 "state": "completed",
00:13:11.765 "digest": "sha256",
00:13:11.765 "dhgroup": "ffdhe8192"
00:13:11.765 }
00:13:11.765 }
00:13:11.765 ]'
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:11.765 11:52:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:12.025 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:12.025 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:12.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.962 11:52:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:13.530
00:13:13.530 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:13.530 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:13.530 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:13.790 {
00:13:13.790 "cntlid": 43,
00:13:13.790 "qid": 0,
00:13:13.790 "state": "enabled",
00:13:13.790 "thread": "nvmf_tgt_poll_group_000",
00:13:13.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:13.790 "listen_address": {
00:13:13.790 "trtype": "RDMA",
00:13:13.790 "adrfam": "IPv4",
00:13:13.790 "traddr": "192.168.100.8",
00:13:13.790 "trsvcid": "4420"
00:13:13.790 },
00:13:13.790 "peer_address": {
00:13:13.790 "trtype": "RDMA",
00:13:13.790 "adrfam": "IPv4",
00:13:13.790 "traddr": "192.168.100.8",
00:13:13.790 "trsvcid": "49743"
00:13:13.790 },
00:13:13.790 "auth": {
00:13:13.790 "state": "completed",
00:13:13.790 "digest": "sha256",
00:13:13.790 "dhgroup": "ffdhe8192"
00:13:13.790 }
00:13:13.790 }
00:13:13.790 ]'
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:13.790 11:52:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:14.050 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==:
00:13:14.050 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==:
00:13:14.619 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:14.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:14.878 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:15.138 11:52:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:15.397
00:13:15.656 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:15.656 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:15.656 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:15.656 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:15.656 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:15.657 {
00:13:15.657 "cntlid": 45,
00:13:15.657 "qid": 0,
00:13:15.657 "state": "enabled",
00:13:15.657 "thread": "nvmf_tgt_poll_group_000",
00:13:15.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:15.657 "listen_address": {
00:13:15.657 "trtype": "RDMA",
00:13:15.657 "adrfam": "IPv4",
00:13:15.657 "traddr": "192.168.100.8",
00:13:15.657 "trsvcid": "4420"
00:13:15.657 },
00:13:15.657 "peer_address": {
00:13:15.657 "trtype": "RDMA",
00:13:15.657 "adrfam": "IPv4",
00:13:15.657 "traddr": "192.168.100.8",
00:13:15.657 "trsvcid": "42959"
00:13:15.657 },
00:13:15.657 "auth": {
00:13:15.657 "state": "completed",
00:13:15.657 "digest": "sha256",
00:13:15.657 "dhgroup": "ffdhe8192"
00:13:15.657 }
00:13:15.657 }
00:13:15.657 ]'
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:15.657 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:15.916 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:15.916 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:15.916 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:15.916 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:15.916 11:52:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:16.175 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK:
00:13:16.175 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK:
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:16.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:16.744 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:17.003 11:52:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:17.577
00:13:17.577 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:17.577 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:17.577 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:17.836 {
00:13:17.836 "cntlid": 47,
00:13:17.836 "qid": 0,
00:13:17.836 "state": "enabled",
00:13:17.836 "thread": "nvmf_tgt_poll_group_000",
00:13:17.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:17.836 "listen_address": {
00:13:17.836 "trtype": "RDMA",
00:13:17.836 "adrfam": "IPv4",
00:13:17.836 "traddr": "192.168.100.8",
00:13:17.836 "trsvcid": "4420"
00:13:17.836 },
00:13:17.836 "peer_address": {
00:13:17.836 "trtype": "RDMA",
00:13:17.836 "adrfam": "IPv4",
00:13:17.836 "traddr": "192.168.100.8",
00:13:17.836 "trsvcid": "57807"
00:13:17.836 },
00:13:17.836 "auth": {
00:13:17.836 "state": "completed",
00:13:17.836 "digest": "sha256",
00:13:17.836 "dhgroup": "ffdhe8192"
00:13:17.836 }
00:13:17.836 }
00:13:17.836 ]'
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:17.836 11:52:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:18.095 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:18.095 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=:
00:13:18.660 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:18.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.919 11:52:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.178
00:13:19.178 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:19.178 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:19.178 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:19.437 {
00:13:19.437 "cntlid": 49,
00:13:19.437 "qid": 0,
00:13:19.437 "state": "enabled",
00:13:19.437 "thread": "nvmf_tgt_poll_group_000",
00:13:19.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:13:19.437 "listen_address": {
00:13:19.437 "trtype": "RDMA",
00:13:19.437 "adrfam": "IPv4",
00:13:19.437 "traddr": "192.168.100.8",
00:13:19.437 "trsvcid": "4420"
00:13:19.437 },
00:13:19.437 "peer_address": {
00:13:19.437 "trtype": "RDMA",
00:13:19.437 "adrfam": "IPv4",
00:13:19.437 "traddr": "192.168.100.8",
00:13:19.437 "trsvcid": "49034"
00:13:19.437 },
00:13:19.437 "auth": {
00:13:19.437 "state": "completed",
00:13:19.437 "digest": "sha384",
00:13:19.437 "dhgroup": "null"
00:13:19.437 }
00:13:19.437 }
00:13:19.437 ]'
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:19.437 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:19.696 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:19.696 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:19.696 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:19.696 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:19.696 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:19.954 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:19.955 11:52:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=:
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:20.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:20.521 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.780 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.038 00:13:21.038 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.038 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.038 11:52:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.296 { 00:13:21.296 "cntlid": 51, 00:13:21.296 "qid": 0, 00:13:21.296 "state": "enabled", 00:13:21.296 "thread": "nvmf_tgt_poll_group_000", 00:13:21.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:21.296 "listen_address": { 00:13:21.296 "trtype": "RDMA", 00:13:21.296 "adrfam": "IPv4", 00:13:21.296 "traddr": "192.168.100.8", 00:13:21.296 "trsvcid": "4420" 00:13:21.296 }, 00:13:21.296 "peer_address": { 00:13:21.296 "trtype": "RDMA", 00:13:21.296 "adrfam": "IPv4", 00:13:21.296 "traddr": "192.168.100.8", 00:13:21.296 "trsvcid": "37777" 00:13:21.296 }, 00:13:21.296 "auth": { 00:13:21.296 "state": "completed", 00:13:21.296 "digest": "sha384", 00:13:21.296 "dhgroup": "null" 00:13:21.296 } 00:13:21.296 } 00:13:21.296 ]' 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.296 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.555 11:52:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:21.555 11:52:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:22.122 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:22.381 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:13:22.640 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.899 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.899 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.156 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.156 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.156 { 00:13:23.156 "cntlid": 53, 00:13:23.156 "qid": 0, 00:13:23.156 "state": "enabled", 00:13:23.156 "thread": "nvmf_tgt_poll_group_000", 00:13:23.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:23.156 "listen_address": { 00:13:23.156 "trtype": "RDMA", 00:13:23.156 "adrfam": "IPv4", 00:13:23.156 "traddr": "192.168.100.8", 00:13:23.156 "trsvcid": "4420" 00:13:23.156 }, 00:13:23.156 "peer_address": { 00:13:23.156 "trtype": "RDMA", 00:13:23.156 "adrfam": "IPv4", 00:13:23.156 "traddr": "192.168.100.8", 00:13:23.156 "trsvcid": "37773" 00:13:23.156 }, 00:13:23.156 "auth": { 00:13:23.156 "state": "completed", 00:13:23.156 "digest": "sha384", 00:13:23.156 "dhgroup": "null" 00:13:23.156 } 00:13:23.156 } 00:13:23.156 ]' 00:13:23.156 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.156 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.156 11:52:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.156 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:23.156 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.156 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.156 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.156 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.414 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:23.414 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:23.982 11:52:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:23.982 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.241 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.499 00:13:24.499 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.499 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.499 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.757 { 00:13:24.757 "cntlid": 55, 00:13:24.757 "qid": 0, 00:13:24.757 "state": "enabled", 00:13:24.757 "thread": "nvmf_tgt_poll_group_000", 00:13:24.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:24.757 "listen_address": { 00:13:24.757 "trtype": "RDMA", 00:13:24.757 "adrfam": "IPv4", 00:13:24.757 "traddr": "192.168.100.8", 00:13:24.757 "trsvcid": "4420" 00:13:24.757 }, 00:13:24.757 "peer_address": { 00:13:24.757 "trtype": "RDMA", 00:13:24.757 "adrfam": "IPv4", 00:13:24.757 "traddr": "192.168.100.8", 00:13:24.757 "trsvcid": "37078" 00:13:24.757 }, 00:13:24.757 "auth": { 00:13:24.757 "state": "completed", 00:13:24.757 "digest": "sha384", 00:13:24.757 "dhgroup": "null" 00:13:24.757 } 00:13:24.757 } 00:13:24.757 ]' 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.757 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.016 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.016 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.016 11:52:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
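The trace above repeats one authentication pass per key: restrict the host's DHCHAP negotiation to a single digest/dhgroup pair, register that key for the host NQN on the target, attach a controller over RDMA, verify the negotiated parameters on the qpair, and detach. A minimal sketch of that pass, assembled from the rpc.py flags and addresses recorded in this log (the shell variables are illustrative, and `rpc_cmd` in the trace is assumed to hit the target's default RPC socket while `hostrpc` uses /var/tmp/host.sock):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
# Host side: allow only one digest/dhgroup combination for DH-HMAC-CHAP.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
# Target side: admit this host on the subsystem with key0 (and ckey0 for bidirectional auth).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach over RDMA; authentication runs during the CONNECT exchange.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the qpair negotiated the expected digest/dhgroup and completed auth,
# as the jq checks in the trace do against .auth.digest/.dhgroup/.state.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# Tear down before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0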
00:13:25.016 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:25.016 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.951 11:52:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.210 00:13:26.210 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.210 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.210 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.468 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.468 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.468 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.469 { 00:13:26.469 "cntlid": 57, 00:13:26.469 "qid": 0, 00:13:26.469 "state": "enabled", 00:13:26.469 "thread": "nvmf_tgt_poll_group_000", 00:13:26.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:26.469 "listen_address": { 00:13:26.469 "trtype": "RDMA", 00:13:26.469 "adrfam": "IPv4", 00:13:26.469 "traddr": "192.168.100.8", 00:13:26.469 "trsvcid": "4420" 00:13:26.469 }, 00:13:26.469 "peer_address": { 00:13:26.469 "trtype": "RDMA", 00:13:26.469 "adrfam": "IPv4", 00:13:26.469 "traddr": "192.168.100.8", 00:13:26.469 "trsvcid": "56697" 00:13:26.469 }, 00:13:26.469 "auth": { 00:13:26.469 "state": "completed", 00:13:26.469 "digest": "sha384", 00:13:26.469 "dhgroup": "ffdhe2048" 00:13:26.469 } 00:13:26.469 } 00:13:26.469 ]' 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.469 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:26.727 11:52:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.661 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.661 
11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.920 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.920 11:52:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.178 { 00:13:28.178 "cntlid": 59, 00:13:28.178 "qid": 0, 00:13:28.178 "state": "enabled", 00:13:28.178 "thread": "nvmf_tgt_poll_group_000", 00:13:28.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:28.178 "listen_address": { 00:13:28.178 "trtype": "RDMA", 00:13:28.178 "adrfam": "IPv4", 00:13:28.178 "traddr": "192.168.100.8", 00:13:28.178 "trsvcid": "4420" 00:13:28.178 }, 00:13:28.178 "peer_address": { 00:13:28.178 "trtype": "RDMA", 00:13:28.178 "adrfam": "IPv4", 00:13:28.178 "traddr": "192.168.100.8", 00:13:28.178 "trsvcid": "48099" 00:13:28.178 }, 00:13:28.178 "auth": { 00:13:28.178 "state": "completed", 00:13:28.178 "digest": "sha384", 00:13:28.178 "dhgroup": "ffdhe2048" 00:13:28.178 } 00:13:28.178 } 00:13:28.178 ]' 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.178 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.436 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
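Each pass is then exercised again from the kernel initiator: nvme-cli connects with the same secrets, the log records "disconnected 1 controller(s)", and the host entry is removed with nvmf_subsystem_remove_host. A sketch of that leg using the flags from the nvme connect/disconnect entries in this trace (the DHHC-1 values here are placeholders, not the logged secrets):

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0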
00:13:28.436 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.436 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.436 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.436 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.695 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:28.695 11:52:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.260 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.518 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.519 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.519 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.776 00:13:29.776 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.777 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.777 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.035 { 00:13:30.035 "cntlid": 61, 00:13:30.035 "qid": 0, 00:13:30.035 "state": "enabled", 00:13:30.035 "thread": "nvmf_tgt_poll_group_000", 00:13:30.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:30.035 "listen_address": { 00:13:30.035 "trtype": "RDMA", 00:13:30.035 "adrfam": "IPv4", 00:13:30.035 "traddr": "192.168.100.8", 00:13:30.035 "trsvcid": "4420" 00:13:30.035 }, 00:13:30.035 "peer_address": { 00:13:30.035 "trtype": "RDMA", 00:13:30.035 "adrfam": "IPv4", 00:13:30.035 "traddr": "192.168.100.8", 00:13:30.035 "trsvcid": "40991" 00:13:30.035 }, 00:13:30.035 "auth": { 00:13:30.035 "state": "completed", 00:13:30.035 "digest": "sha384", 00:13:30.035 "dhgroup": "ffdhe2048" 00:13:30.035 } 00:13:30.035 } 00:13:30.035 ]' 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:30.035 11:52:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.035 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.035 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.035 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.035 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.035 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.293 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:30.293 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:30.860 11:52:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:31.118 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.377 11:52:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.377 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.635 00:13:31.635 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.635 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.635 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.893 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.893 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.893 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.894 { 00:13:31.894 "cntlid": 63, 00:13:31.894 "qid": 0, 00:13:31.894 "state": "enabled", 00:13:31.894 "thread": "nvmf_tgt_poll_group_000", 00:13:31.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:31.894 "listen_address": { 00:13:31.894 "trtype": "RDMA", 00:13:31.894 "adrfam": "IPv4", 00:13:31.894 "traddr": "192.168.100.8", 00:13:31.894 "trsvcid": "4420" 00:13:31.894 }, 00:13:31.894 "peer_address": { 00:13:31.894 "trtype": "RDMA", 00:13:31.894 "adrfam": "IPv4", 00:13:31.894 "traddr": "192.168.100.8", 00:13:31.894 "trsvcid": "44215" 00:13:31.894 }, 00:13:31.894 "auth": { 00:13:31.894 "state": "completed", 00:13:31.894 "digest": "sha384", 00:13:31.894 "dhgroup": "ffdhe2048" 00:13:31.894 } 00:13:31.894 } 00:13:31.894 ]' 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.894 11:52:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.153 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:32.153 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:32.728 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:32.986 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.987 11:52:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.245 00:13:33.245 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.245 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.245 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.504 { 00:13:33.504 "cntlid": 65, 00:13:33.504 "qid": 0, 00:13:33.504 "state": "enabled", 00:13:33.504 "thread": "nvmf_tgt_poll_group_000", 00:13:33.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:33.504 "listen_address": { 00:13:33.504 "trtype": "RDMA", 00:13:33.504 "adrfam": "IPv4", 00:13:33.504 "traddr": "192.168.100.8", 00:13:33.504 "trsvcid": "4420" 00:13:33.504 }, 00:13:33.504 "peer_address": { 00:13:33.504 "trtype": "RDMA", 00:13:33.504 "adrfam": "IPv4", 00:13:33.504 "traddr": "192.168.100.8", 00:13:33.504 "trsvcid": "54220" 
00:13:33.504 }, 00:13:33.504 "auth": { 00:13:33.504 "state": "completed", 00:13:33.504 "digest": "sha384", 00:13:33.504 "dhgroup": "ffdhe3072" 00:13:33.504 } 00:13:33.504 } 00:13:33.504 ]' 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.504 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.763 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.763 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.763 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.763 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:33.763 11:52:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.720 11:52:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.978 00:13:34.978 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.978 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.978 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.236 { 00:13:35.236 "cntlid": 67, 00:13:35.236 "qid": 0, 00:13:35.236 "state": "enabled", 00:13:35.236 "thread": "nvmf_tgt_poll_group_000", 00:13:35.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 
00:13:35.236 "listen_address": { 00:13:35.236 "trtype": "RDMA", 00:13:35.236 "adrfam": "IPv4", 00:13:35.236 "traddr": "192.168.100.8", 00:13:35.236 "trsvcid": "4420" 00:13:35.236 }, 00:13:35.236 "peer_address": { 00:13:35.236 "trtype": "RDMA", 00:13:35.236 "adrfam": "IPv4", 00:13:35.236 "traddr": "192.168.100.8", 00:13:35.236 "trsvcid": "46529" 00:13:35.236 }, 00:13:35.236 "auth": { 00:13:35.236 "state": "completed", 00:13:35.236 "digest": "sha384", 00:13:35.236 "dhgroup": "ffdhe3072" 00:13:35.236 } 00:13:35.236 } 00:13:35.236 ]' 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.236 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.237 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.495 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.495 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.495 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.754 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:35.754 11:52:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.322 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.581 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.582 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.840 00:13:36.840 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.840 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.840 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:37.099 { 00:13:37.099 "cntlid": 69, 00:13:37.099 "qid": 0, 00:13:37.099 "state": "enabled", 00:13:37.099 "thread": "nvmf_tgt_poll_group_000", 00:13:37.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:37.099 "listen_address": { 00:13:37.099 "trtype": "RDMA", 00:13:37.099 "adrfam": "IPv4", 00:13:37.099 "traddr": "192.168.100.8", 00:13:37.099 "trsvcid": "4420" 00:13:37.099 }, 00:13:37.099 "peer_address": { 00:13:37.099 "trtype": "RDMA", 00:13:37.099 "adrfam": "IPv4", 00:13:37.099 "traddr": "192.168.100.8", 00:13:37.099 "trsvcid": "56945" 00:13:37.099 }, 00:13:37.099 "auth": { 00:13:37.099 "state": "completed", 00:13:37.099 "digest": "sha384", 00:13:37.099 "dhgroup": "ffdhe3072" 00:13:37.099 } 00:13:37.099 } 00:13:37.099 ]' 00:13:37.099 11:52:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.099 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.357 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:37.357 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:37.923 11:52:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.181 11:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:38.181 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:38.439 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:38.439 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.439 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.439 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:38.439 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.440 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.699 00:13:38.699 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.699 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.699 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.699 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.957 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.957 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.958 { 00:13:38.958 "cntlid": 71, 00:13:38.958 "qid": 0, 00:13:38.958 "state": "enabled", 00:13:38.958 "thread": "nvmf_tgt_poll_group_000", 00:13:38.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:38.958 "listen_address": { 00:13:38.958 "trtype": "RDMA", 00:13:38.958 "adrfam": "IPv4", 00:13:38.958 "traddr": "192.168.100.8", 00:13:38.958 "trsvcid": "4420" 00:13:38.958 }, 00:13:38.958 "peer_address": { 00:13:38.958 "trtype": "RDMA", 00:13:38.958 "adrfam": "IPv4", 00:13:38.958 "traddr": "192.168.100.8", 00:13:38.958 "trsvcid": "34957" 00:13:38.958 }, 00:13:38.958 "auth": { 00:13:38.958 "state": "completed", 00:13:38.958 "digest": "sha384", 00:13:38.958 "dhgroup": "ffdhe3072" 00:13:38.958 } 00:13:38.958 } 00:13:38.958 ]' 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.958 11:52:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.216 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:39.216 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:39.783 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:39.784 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.042 11:52:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.042 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.042 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.042 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.042 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.300 00:13:40.300 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.301 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.301 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.560 11:52:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.560 { 00:13:40.560 "cntlid": 73, 00:13:40.560 "qid": 0, 00:13:40.560 "state": "enabled", 00:13:40.560 "thread": "nvmf_tgt_poll_group_000", 00:13:40.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:40.560 "listen_address": { 00:13:40.560 "trtype": "RDMA", 00:13:40.560 "adrfam": "IPv4", 00:13:40.560 "traddr": "192.168.100.8", 00:13:40.560 "trsvcid": "4420" 00:13:40.560 }, 00:13:40.560 "peer_address": { 00:13:40.560 "trtype": "RDMA", 00:13:40.560 "adrfam": "IPv4", 00:13:40.560 "traddr": "192.168.100.8", 00:13:40.560 "trsvcid": "54690" 00:13:40.560 }, 00:13:40.560 "auth": { 00:13:40.560 "state": "completed", 00:13:40.560 "digest": "sha384", 00:13:40.560 "dhgroup": "ffdhe4096" 00:13:40.560 } 00:13:40.560 } 00:13:40.560 ]' 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:40.560 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.819 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.819 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.819 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.819 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:40.819 11:52:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.754 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.755 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.755 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.755 11:52:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.013 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.271 { 00:13:42.271 "cntlid": 75, 00:13:42.271 "qid": 0, 00:13:42.271 "state": "enabled", 00:13:42.271 "thread": "nvmf_tgt_poll_group_000", 00:13:42.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:42.271 "listen_address": { 00:13:42.271 "trtype": "RDMA", 00:13:42.271 "adrfam": "IPv4", 00:13:42.271 "traddr": "192.168.100.8", 00:13:42.271 "trsvcid": "4420" 00:13:42.271 }, 00:13:42.271 "peer_address": { 00:13:42.271 "trtype": "RDMA", 00:13:42.271 "adrfam": "IPv4", 00:13:42.271 "traddr": "192.168.100.8", 00:13:42.271 "trsvcid": "41425" 00:13:42.271 }, 00:13:42.271 "auth": { 00:13:42.271 "state": "completed", 00:13:42.271 "digest": "sha384", 00:13:42.271 "dhgroup": "ffdhe4096" 00:13:42.271 } 00:13:42.271 } 00:13:42.271 ]' 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.271 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.529 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:42.529 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.529 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.529 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.529 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.786 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:42.786 11:52:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.353 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.611 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.869 00:13:43.869 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:13:43.869 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.869 11:52:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.127 { 00:13:44.127 "cntlid": 77, 00:13:44.127 "qid": 0, 00:13:44.127 "state": "enabled", 00:13:44.127 "thread": "nvmf_tgt_poll_group_000", 00:13:44.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:44.127 "listen_address": { 00:13:44.127 "trtype": "RDMA", 00:13:44.127 "adrfam": "IPv4", 00:13:44.127 "traddr": "192.168.100.8", 00:13:44.127 "trsvcid": "4420" 00:13:44.127 }, 00:13:44.127 "peer_address": { 00:13:44.127 "trtype": "RDMA", 00:13:44.127 "adrfam": "IPv4", 00:13:44.127 "traddr": "192.168.100.8", 00:13:44.127 "trsvcid": "51014" 00:13:44.127 }, 00:13:44.127 "auth": { 00:13:44.127 "state": "completed", 00:13:44.127 "digest": "sha384", 00:13:44.127 "dhgroup": "ffdhe4096" 00:13:44.127 } 00:13:44.127 } 00:13:44.127 ]' 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.127 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.387 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:44.387 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:44.954 11:52:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.212 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.470 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.729 00:13:45.729 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.729 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.729 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.988 { 00:13:45.988 "cntlid": 79, 00:13:45.988 "qid": 0, 00:13:45.988 "state": "enabled", 00:13:45.988 "thread": "nvmf_tgt_poll_group_000", 00:13:45.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:45.988 "listen_address": { 00:13:45.988 "trtype": "RDMA", 00:13:45.988 "adrfam": "IPv4", 00:13:45.988 "traddr": "192.168.100.8", 00:13:45.988 "trsvcid": "4420" 00:13:45.988 }, 00:13:45.988 "peer_address": { 00:13:45.988 "trtype": "RDMA", 00:13:45.988 "adrfam": "IPv4", 00:13:45.988 "traddr": "192.168.100.8", 00:13:45.988 "trsvcid": "41376" 00:13:45.988 }, 00:13:45.988 "auth": { 00:13:45.988 "state": "completed", 00:13:45.988 "digest": "sha384", 00:13:45.988 "dhgroup": "ffdhe4096" 00:13:45.988 } 00:13:45.988 } 00:13:45.988 ]' 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.988 11:52:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.246 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:46.246 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:46.816 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:47.075 11:52:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.075 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.075 11:52:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.642 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.642 { 00:13:47.642 "cntlid": 81, 00:13:47.642 "qid": 0, 00:13:47.642 "state": "enabled", 00:13:47.642 "thread": "nvmf_tgt_poll_group_000", 00:13:47.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:47.642 "listen_address": { 00:13:47.642 "trtype": "RDMA", 00:13:47.642 "adrfam": "IPv4", 00:13:47.642 "traddr": "192.168.100.8", 00:13:47.642 "trsvcid": "4420" 00:13:47.642 }, 00:13:47.642 "peer_address": { 00:13:47.642 "trtype": "RDMA", 00:13:47.642 "adrfam": "IPv4", 00:13:47.642 "traddr": "192.168.100.8", 00:13:47.642 "trsvcid": "48111" 00:13:47.642 }, 00:13:47.642 "auth": { 00:13:47.642 "state": "completed", 00:13:47.642 "digest": "sha384", 00:13:47.642 "dhgroup": "ffdhe6144" 00:13:47.642 } 00:13:47.642 } 00:13:47.642 ]' 00:13:47.642 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.901 11:52:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.161 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:48.161 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:48.728 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
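
For readers following the trace, the bdev_connect helper echoed above expands to a single host-side RPC. A minimal standalone sketch, with the address, NQNs, and key names copied verbatim from this run; it assumes the SPDK host app is already serving /var/tmp/host.sock and that key1/ckey1 were registered with it earlier in the test, which this excerpt does not show:

#!/usr/bin/env bash
# Hypothetical re-run of the attach step traced here; every flag value is
# taken from this log. Prior registration of key1/ckey1 is assumed.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

# bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 boils down to:
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
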
00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.986 11:52:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.245 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.503 { 00:13:49.503 "cntlid": 83, 00:13:49.503 "qid": 0, 00:13:49.503 "state": "enabled", 00:13:49.503 "thread": "nvmf_tgt_poll_group_000", 00:13:49.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:49.503 "listen_address": { 00:13:49.503 "trtype": "RDMA", 00:13:49.503 "adrfam": "IPv4", 00:13:49.503 "traddr": "192.168.100.8", 00:13:49.503 "trsvcid": "4420" 00:13:49.503 }, 00:13:49.503 "peer_address": { 00:13:49.503 "trtype": "RDMA", 00:13:49.503 "adrfam": "IPv4", 00:13:49.503 "traddr": "192.168.100.8", 00:13:49.503 "trsvcid": "32802" 00:13:49.503 }, 00:13:49.503 "auth": { 00:13:49.503 "state": "completed", 00:13:49.503 "digest": "sha384", 00:13:49.503 "dhgroup": "ffdhe6144" 00:13:49.503 } 00:13:49.503 } 00:13:49.503 ]' 00:13:49.503 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:13:49.761 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.018 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:50.018 11:52:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:50.583 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.841 11:52:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.841 11:52:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.099 00:13:51.099 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.099 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.099 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.357 { 00:13:51.357 "cntlid": 85, 00:13:51.357 "qid": 0, 00:13:51.357 "state": "enabled", 00:13:51.357 "thread": "nvmf_tgt_poll_group_000", 00:13:51.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:51.357 "listen_address": { 00:13:51.357 "trtype": "RDMA", 00:13:51.357 "adrfam": "IPv4", 00:13:51.357 "traddr": "192.168.100.8", 00:13:51.357 "trsvcid": "4420" 00:13:51.357 }, 00:13:51.357 "peer_address": { 00:13:51.357 "trtype": "RDMA", 00:13:51.357 "adrfam": "IPv4", 00:13:51.357 "traddr": "192.168.100.8", 00:13:51.357 "trsvcid": "43696" 00:13:51.357 }, 00:13:51.357 "auth": { 00:13:51.357 "state": "completed", 00:13:51.357 "digest": "sha384", 00:13:51.357 "dhgroup": "ffdhe6144" 00:13:51.357 } 00:13:51.357 } 00:13:51.357 ]' 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.357 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:51.616 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.616 
11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.616 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.616 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.616 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:51.616 11:52:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:52.551 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:13:52.809 11:53:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.809 11:53:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.067 00:13:53.067 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.067 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.067 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.326 { 00:13:53.326 "cntlid": 87, 00:13:53.326 "qid": 0, 00:13:53.326 "state": "enabled", 00:13:53.326 "thread": "nvmf_tgt_poll_group_000", 00:13:53.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:53.326 "listen_address": { 00:13:53.326 "trtype": "RDMA", 00:13:53.326 "adrfam": "IPv4", 00:13:53.326 "traddr": "192.168.100.8", 00:13:53.326 "trsvcid": "4420" 00:13:53.326 }, 00:13:53.326 "peer_address": { 00:13:53.326 "trtype": "RDMA", 00:13:53.326 "adrfam": "IPv4", 00:13:53.326 "traddr": "192.168.100.8", 00:13:53.326 "trsvcid": "39587" 00:13:53.326 }, 00:13:53.326 "auth": { 00:13:53.326 "state": "completed", 00:13:53.326 "digest": "sha384", 00:13:53.326 "dhgroup": "ffdhe6144" 00:13:53.326 } 00:13:53.326 } 00:13:53.326 ]' 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.326 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.584 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:53.584 11:53:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:13:54.150 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:54.409 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.668 11:53:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.235 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.235 { 00:13:55.235 "cntlid": 89, 00:13:55.235 "qid": 0, 00:13:55.235 "state": "enabled", 00:13:55.235 "thread": "nvmf_tgt_poll_group_000", 00:13:55.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:55.235 "listen_address": { 00:13:55.235 "trtype": "RDMA", 00:13:55.235 "adrfam": "IPv4", 00:13:55.235 "traddr": "192.168.100.8", 00:13:55.235 "trsvcid": "4420" 00:13:55.235 }, 00:13:55.235 "peer_address": { 00:13:55.235 "trtype": "RDMA", 00:13:55.235 "adrfam": "IPv4", 00:13:55.235 "traddr": "192.168.100.8", 00:13:55.235 "trsvcid": "53012" 00:13:55.235 }, 00:13:55.235 "auth": { 00:13:55.235 "state": "completed", 00:13:55.235 "digest": "sha384", 00:13:55.235 "dhgroup": "ffdhe8192" 00:13:55.235 } 00:13:55.235 } 00:13:55.235 ]' 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.235 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.493 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.751 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:55.751 11:53:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.317 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.575 11:53:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.141 00:13:57.141 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.141 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.141 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.399 { 00:13:57.399 "cntlid": 91, 00:13:57.399 "qid": 0, 00:13:57.399 "state": "enabled", 00:13:57.399 "thread": "nvmf_tgt_poll_group_000", 00:13:57.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:57.399 "listen_address": { 00:13:57.399 "trtype": "RDMA", 00:13:57.399 "adrfam": "IPv4", 00:13:57.399 "traddr": "192.168.100.8", 00:13:57.399 "trsvcid": "4420" 00:13:57.399 }, 00:13:57.399 "peer_address": { 00:13:57.399 "trtype": "RDMA", 00:13:57.399 "adrfam": "IPv4", 00:13:57.399 "traddr": "192.168.100.8", 00:13:57.399 "trsvcid": "33883" 00:13:57.399 }, 00:13:57.399 "auth": { 
00:13:57.399 "state": "completed", 00:13:57.399 "digest": "sha384", 00:13:57.399 "dhgroup": "ffdhe8192" 00:13:57.399 } 00:13:57.399 } 00:13:57.399 ]' 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.399 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.658 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:57.658 11:53:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:13:58.224 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.482 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:58.482 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.483 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.049 00:13:59.049 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.049 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.049 11:53:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.307 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.307 { 00:13:59.307 "cntlid": 93, 00:13:59.307 "qid": 0, 00:13:59.307 "state": "enabled", 00:13:59.307 "thread": "nvmf_tgt_poll_group_000", 00:13:59.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:13:59.307 "listen_address": { 00:13:59.307 "trtype": "RDMA", 00:13:59.307 "adrfam": "IPv4", 00:13:59.307 "traddr": "192.168.100.8", 
00:13:59.307 "trsvcid": "4420" 00:13:59.307 }, 00:13:59.307 "peer_address": { 00:13:59.307 "trtype": "RDMA", 00:13:59.307 "adrfam": "IPv4", 00:13:59.307 "traddr": "192.168.100.8", 00:13:59.307 "trsvcid": "39654" 00:13:59.308 }, 00:13:59.308 "auth": { 00:13:59.308 "state": "completed", 00:13:59.308 "digest": "sha384", 00:13:59.308 "dhgroup": "ffdhe8192" 00:13:59.308 } 00:13:59.308 } 00:13:59.308 ]' 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.308 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.566 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:13:59.566 11:53:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:00.133 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:00.391 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.649 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.216 00:14:01.216 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.216 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.216 11:53:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.216 { 00:14:01.216 "cntlid": 95, 00:14:01.216 "qid": 0, 00:14:01.216 "state": "enabled", 00:14:01.216 "thread": "nvmf_tgt_poll_group_000", 00:14:01.216 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:01.216 "listen_address": { 00:14:01.216 "trtype": "RDMA", 00:14:01.216 "adrfam": "IPv4", 00:14:01.216 "traddr": "192.168.100.8", 00:14:01.216 "trsvcid": "4420" 00:14:01.216 }, 00:14:01.216 "peer_address": { 00:14:01.216 "trtype": "RDMA", 00:14:01.216 "adrfam": "IPv4", 00:14:01.216 "traddr": "192.168.100.8", 00:14:01.216 "trsvcid": "49388" 00:14:01.216 }, 00:14:01.216 "auth": { 00:14:01.216 "state": "completed", 00:14:01.216 "digest": "sha384", 00:14:01.216 "dhgroup": "ffdhe8192" 00:14:01.216 } 00:14:01.216 } 00:14:01.216 ]' 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.216 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.475 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.475 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.475 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.475 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.734 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:01.734 11:53:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.302 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.562 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.821 00:14:02.821 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.821 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.821 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.081 11:53:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.081 { 00:14:03.081 "cntlid": 97, 00:14:03.081 "qid": 0, 00:14:03.081 "state": "enabled", 00:14:03.081 "thread": "nvmf_tgt_poll_group_000", 00:14:03.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:03.081 "listen_address": { 00:14:03.081 "trtype": "RDMA", 00:14:03.081 "adrfam": "IPv4", 00:14:03.081 "traddr": "192.168.100.8", 00:14:03.081 "trsvcid": "4420" 00:14:03.081 }, 00:14:03.081 "peer_address": { 00:14:03.081 "trtype": "RDMA", 00:14:03.081 "adrfam": "IPv4", 00:14:03.081 "traddr": "192.168.100.8", 00:14:03.081 "trsvcid": "56137" 00:14:03.081 }, 00:14:03.081 "auth": { 00:14:03.081 "state": "completed", 00:14:03.081 "digest": "sha512", 00:14:03.081 "dhgroup": "null" 00:14:03.081 } 00:14:03.081 } 00:14:03.081 ]' 00:14:03.081 11:53:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.081 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.340 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:03.340 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:03.908 11:53:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.196 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.454 00:14:04.454 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.454 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.454 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.713 { 00:14:04.713 "cntlid": 99, 00:14:04.713 "qid": 0, 00:14:04.713 "state": "enabled", 00:14:04.713 "thread": "nvmf_tgt_poll_group_000", 00:14:04.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:04.713 "listen_address": { 00:14:04.713 "trtype": "RDMA", 00:14:04.713 "adrfam": "IPv4", 00:14:04.713 "traddr": "192.168.100.8", 00:14:04.713 "trsvcid": "4420" 00:14:04.713 }, 00:14:04.713 "peer_address": { 00:14:04.713 "trtype": "RDMA", 00:14:04.713 "adrfam": "IPv4", 00:14:04.713 "traddr": "192.168.100.8", 00:14:04.713 "trsvcid": "57890" 00:14:04.713 }, 00:14:04.713 "auth": { 00:14:04.713 "state": "completed", 00:14:04.713 "digest": "sha512", 00:14:04.713 "dhgroup": "null" 00:14:04.713 } 00:14:04.713 } 00:14:04.713 ]' 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.713 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.971 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:04.971 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.971 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.971 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.971 11:53:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.230 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:05.230 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:05.798 
11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:05.798 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.057 11:53:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.316 00:14:06.316 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.316 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.316 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.575 
11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.575 { 00:14:06.575 "cntlid": 101, 00:14:06.575 "qid": 0, 00:14:06.575 "state": "enabled", 00:14:06.575 "thread": "nvmf_tgt_poll_group_000", 00:14:06.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:06.575 "listen_address": { 00:14:06.575 "trtype": "RDMA", 00:14:06.575 "adrfam": "IPv4", 00:14:06.575 "traddr": "192.168.100.8", 00:14:06.575 "trsvcid": "4420" 00:14:06.575 }, 00:14:06.575 "peer_address": { 00:14:06.575 "trtype": "RDMA", 00:14:06.575 "adrfam": "IPv4", 00:14:06.575 "traddr": "192.168.100.8", 00:14:06.575 "trsvcid": "35945" 00:14:06.575 }, 00:14:06.575 "auth": { 00:14:06.575 "state": "completed", 00:14:06.575 "digest": "sha512", 00:14:06.575 "dhgroup": "null" 00:14:06.575 } 00:14:06.575 } 00:14:06.575 ]' 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.575 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.834 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:06.834 11:53:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:07.400 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.659 11:53:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:07.659 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.918 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.918 00:14:08.177 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.177 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.177 11:53:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.177 { 00:14:08.177 "cntlid": 103, 00:14:08.177 "qid": 0, 00:14:08.177 "state": "enabled", 00:14:08.177 "thread": "nvmf_tgt_poll_group_000", 00:14:08.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:08.177 "listen_address": { 00:14:08.177 "trtype": "RDMA", 00:14:08.177 "adrfam": "IPv4", 00:14:08.177 "traddr": "192.168.100.8", 00:14:08.177 "trsvcid": "4420" 00:14:08.177 }, 00:14:08.177 "peer_address": { 00:14:08.177 "trtype": "RDMA", 00:14:08.177 "adrfam": "IPv4", 00:14:08.177 "traddr": "192.168.100.8", 00:14:08.177 "trsvcid": "55296" 00:14:08.177 }, 00:14:08.177 "auth": { 00:14:08.177 "state": "completed", 00:14:08.177 "digest": "sha512", 00:14:08.177 "dhgroup": "null" 00:14:08.177 } 00:14:08.177 } 00:14:08.177 ]' 00:14:08.177 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.436 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.702 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:08.702 11:53:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.271 11:53:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:09.271 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.530 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.789 00:14:09.789 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:14:09.789 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.789 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.048 { 00:14:10.048 "cntlid": 105, 00:14:10.048 "qid": 0, 00:14:10.048 "state": "enabled", 00:14:10.048 "thread": "nvmf_tgt_poll_group_000", 00:14:10.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:10.048 "listen_address": { 00:14:10.048 "trtype": "RDMA", 00:14:10.048 "adrfam": "IPv4", 00:14:10.048 "traddr": "192.168.100.8", 00:14:10.048 "trsvcid": "4420" 00:14:10.048 }, 00:14:10.048 "peer_address": { 00:14:10.048 "trtype": "RDMA", 00:14:10.048 "adrfam": "IPv4", 00:14:10.048 "traddr": "192.168.100.8", 00:14:10.048 "trsvcid": "54599" 00:14:10.048 }, 00:14:10.048 "auth": { 00:14:10.048 "state": "completed", 00:14:10.048 "digest": "sha512", 00:14:10.048 "dhgroup": "ffdhe2048" 00:14:10.048 } 00:14:10.048 } 00:14:10.048 ]' 00:14:10.048 11:53:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.048 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.048 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.048 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.048 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.306 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.306 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.306 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.306 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:10.306 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:10.873 11:53:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.132 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.391 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.650 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.650 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.909 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.909 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.909 { 00:14:11.909 "cntlid": 107, 00:14:11.910 "qid": 0, 00:14:11.910 "state": "enabled", 00:14:11.910 "thread": "nvmf_tgt_poll_group_000", 00:14:11.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:11.910 "listen_address": { 00:14:11.910 "trtype": "RDMA", 00:14:11.910 "adrfam": "IPv4", 00:14:11.910 "traddr": "192.168.100.8", 00:14:11.910 "trsvcid": "4420" 00:14:11.910 }, 00:14:11.910 "peer_address": { 00:14:11.910 "trtype": "RDMA", 00:14:11.910 "adrfam": "IPv4", 00:14:11.910 "traddr": "192.168.100.8", 00:14:11.910 "trsvcid": "44438" 00:14:11.910 }, 00:14:11.910 "auth": { 00:14:11.910 "state": "completed", 00:14:11.910 "digest": "sha512", 00:14:11.910 "dhgroup": "ffdhe2048" 00:14:11.910 } 00:14:11.910 } 00:14:11.910 ]' 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.910 11:53:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.168 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 
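nvme_connect above is a thin wrapper around the kernel initiator; the expanded nvme-cli call it produces follows on the next trace line. Its shape, with the per-run DHHC-1 secrets replaced by placeholders:

    # connect through the kernel host stack with bidirectional DH-HMAC-CHAP,
    # then drop the controller again; -i 1 limits I/O queues, -l 0 sets a
    # zero controller-loss timeout so a failed handshake surfaces immediately
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:01:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0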
00:14:12.168 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:12.737 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.996 11:53:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.256 00:14:13.256 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.256 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.256 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.515 { 00:14:13.515 "cntlid": 109, 00:14:13.515 "qid": 0, 00:14:13.515 "state": "enabled", 00:14:13.515 "thread": "nvmf_tgt_poll_group_000", 00:14:13.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:13.515 "listen_address": { 00:14:13.515 "trtype": "RDMA", 00:14:13.515 "adrfam": "IPv4", 00:14:13.515 "traddr": "192.168.100.8", 00:14:13.515 "trsvcid": "4420" 00:14:13.515 }, 00:14:13.515 "peer_address": { 00:14:13.515 "trtype": "RDMA", 00:14:13.515 "adrfam": "IPv4", 00:14:13.515 "traddr": "192.168.100.8", 00:14:13.515 "trsvcid": "39407" 00:14:13.515 }, 00:14:13.515 "auth": { 00:14:13.515 "state": "completed", 00:14:13.515 "digest": "sha512", 00:14:13.515 "dhgroup": "ffdhe2048" 00:14:13.515 } 00:14:13.515 } 00:14:13.515 ]' 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.515 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.775 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.775 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.775 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.775 11:53:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:13.775 11:53:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:14.712 11:53:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.712 11:53:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.971 00:14:14.971 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.971 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.971 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.539 { 00:14:15.539 "cntlid": 111, 00:14:15.539 "qid": 0, 00:14:15.539 "state": "enabled", 00:14:15.539 "thread": "nvmf_tgt_poll_group_000", 00:14:15.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:15.539 "listen_address": { 00:14:15.539 "trtype": "RDMA", 00:14:15.539 "adrfam": "IPv4", 00:14:15.539 "traddr": "192.168.100.8", 00:14:15.539 "trsvcid": "4420" 00:14:15.539 }, 00:14:15.539 "peer_address": { 00:14:15.539 "trtype": "RDMA", 00:14:15.539 "adrfam": "IPv4", 00:14:15.539 "traddr": "192.168.100.8", 00:14:15.539 "trsvcid": "45046" 00:14:15.539 }, 00:14:15.539 "auth": { 00:14:15.539 "state": "completed", 00:14:15.539 "digest": "sha512", 00:14:15.539 "dhgroup": "ffdhe2048" 00:14:15.539 } 00:14:15.539 } 00:14:15.539 ]' 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.539 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.798 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:15.798 11:53:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:16.364 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.623 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.881 00:14:16.881 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.881 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.881 11:53:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.139 { 00:14:17.139 "cntlid": 113, 00:14:17.139 "qid": 0, 00:14:17.139 "state": "enabled", 00:14:17.139 "thread": "nvmf_tgt_poll_group_000", 00:14:17.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:17.139 "listen_address": { 00:14:17.139 "trtype": "RDMA", 00:14:17.139 "adrfam": "IPv4", 00:14:17.139 "traddr": "192.168.100.8", 00:14:17.139 "trsvcid": "4420" 00:14:17.139 }, 00:14:17.139 "peer_address": { 00:14:17.139 "trtype": "RDMA", 00:14:17.139 "adrfam": "IPv4", 00:14:17.139 "traddr": "192.168.100.8", 00:14:17.139 "trsvcid": "52518" 00:14:17.139 }, 00:14:17.139 "auth": { 00:14:17.139 "state": "completed", 00:14:17.139 "digest": "sha512", 00:14:17.139 "dhgroup": "ffdhe3072" 00:14:17.139 } 00:14:17.139 } 00:14:17.139 ]' 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.139 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.397 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:17.397 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:17.963 11:53:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:18.221 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:18.479 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:18.479 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
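The ckey assignment traced at auth.sh@68 is the detail that decides between unidirectional and bidirectional authentication: ${ckeys[$3]:+...} expands to the extra --dhchap-ctrlr-key argument only when a controller secret exists for that key index, which is why the key3 passes above add the host with --dhchap-key key3 alone. The idiom in isolation (array contents hypothetical):

    # keys 0-2 carry controller secrets here; key 3 deliberately does not
    ckeys=("c0" "c1" "c2" "")
    for keyid in 0 1 2 3; do
        # expands to two words when ckeys[$keyid] is non-empty, else to nothing
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: ${ckey[*]:-unidirectional (no ctrlr key)}"
    done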
00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.480 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.738 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.738 { 00:14:18.738 "cntlid": 115, 00:14:18.738 "qid": 0, 00:14:18.738 "state": "enabled", 00:14:18.738 "thread": "nvmf_tgt_poll_group_000", 00:14:18.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:18.738 "listen_address": { 00:14:18.738 "trtype": "RDMA", 00:14:18.738 "adrfam": "IPv4", 00:14:18.738 "traddr": "192.168.100.8", 00:14:18.738 "trsvcid": "4420" 00:14:18.738 }, 00:14:18.738 "peer_address": { 00:14:18.738 "trtype": "RDMA", 00:14:18.738 "adrfam": "IPv4", 00:14:18.738 "traddr": "192.168.100.8", 00:14:18.738 "trsvcid": "36075" 00:14:18.738 }, 00:14:18.738 "auth": { 00:14:18.738 "state": "completed", 00:14:18.738 "digest": "sha512", 00:14:18.738 "dhgroup": "ffdhe3072" 00:14:18.738 } 00:14:18.738 } 00:14:18.738 ]' 00:14:18.738 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
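Stripped of xtrace noise, every (digest, dhgroup, keyid) iteration in this block is the same three-call sequence: constrain the host's allowed DHCHAP options, register the host NQN on the target with the keys under test, then attach so the DH-HMAC-CHAP handshake actually runs. Schematically, for the sha512/ffdhe3072/key1 case traced here (rpc.py path shortened; hostnqn as in this run):

    # host side (-s /var/tmp/host.sock): pin digest and DH group
    spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # target side (default socket): allow the host with key1/ckey1
    spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach; a successful attach implies authentication completed
    spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1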
00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.996 11:53:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.254 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:19.254 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:19.820 11:53:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.079 
11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.079 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.337 00:14:20.337 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.337 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.337 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.595 { 00:14:20.595 "cntlid": 117, 00:14:20.595 "qid": 0, 00:14:20.595 "state": "enabled", 00:14:20.595 "thread": "nvmf_tgt_poll_group_000", 00:14:20.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:20.595 "listen_address": { 00:14:20.595 "trtype": "RDMA", 00:14:20.595 "adrfam": "IPv4", 00:14:20.595 "traddr": "192.168.100.8", 00:14:20.595 "trsvcid": "4420" 00:14:20.595 }, 00:14:20.595 "peer_address": { 00:14:20.595 "trtype": "RDMA", 00:14:20.595 "adrfam": "IPv4", 00:14:20.595 "traddr": "192.168.100.8", 00:14:20.595 "trsvcid": "51637" 00:14:20.595 }, 00:14:20.595 "auth": { 00:14:20.595 "state": "completed", 00:14:20.595 "digest": "sha512", 00:14:20.595 "dhgroup": "ffdhe3072" 00:14:20.595 } 00:14:20.595 } 00:14:20.595 ]' 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.595 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.853 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:20.853 11:53:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.790 11:53:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.048 00:14:22.048 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.048 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.048 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.307 { 00:14:22.307 "cntlid": 119, 00:14:22.307 "qid": 0, 00:14:22.307 "state": "enabled", 00:14:22.307 "thread": "nvmf_tgt_poll_group_000", 00:14:22.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:22.307 "listen_address": { 00:14:22.307 "trtype": "RDMA", 00:14:22.307 "adrfam": "IPv4", 00:14:22.307 "traddr": "192.168.100.8", 00:14:22.307 "trsvcid": "4420" 00:14:22.307 }, 00:14:22.307 "peer_address": { 00:14:22.307 "trtype": "RDMA", 00:14:22.307 "adrfam": "IPv4", 00:14:22.307 "traddr": "192.168.100.8", 00:14:22.307 "trsvcid": "47965" 00:14:22.307 }, 00:14:22.307 "auth": { 00:14:22.307 "state": "completed", 00:14:22.307 "digest": "sha512", 00:14:22.307 "dhgroup": "ffdhe3072" 
00:14:22.307 } 00:14:22.307 } 00:14:22.307 ]' 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.307 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:22.566 11:53:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.504 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.767 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.767 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.767 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.767 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.033 00:14:24.033 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.033 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.033 11:53:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.033 { 00:14:24.033 "cntlid": 121, 00:14:24.033 "qid": 0, 00:14:24.033 "state": "enabled", 00:14:24.033 "thread": "nvmf_tgt_poll_group_000", 00:14:24.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:24.033 "listen_address": { 00:14:24.033 "trtype": "RDMA", 00:14:24.033 "adrfam": "IPv4", 00:14:24.033 "traddr": "192.168.100.8", 00:14:24.033 "trsvcid": "4420" 00:14:24.033 }, 00:14:24.033 "peer_address": { 00:14:24.033 "trtype": "RDMA", 
00:14:24.033 "adrfam": "IPv4", 00:14:24.033 "traddr": "192.168.100.8", 00:14:24.033 "trsvcid": "51745" 00:14:24.033 }, 00:14:24.033 "auth": { 00:14:24.033 "state": "completed", 00:14:24.033 "digest": "sha512", 00:14:24.033 "dhgroup": "ffdhe4096" 00:14:24.033 } 00:14:24.033 } 00:14:24.033 ]' 00:14:24.033 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.318 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.622 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:24.622 11:53:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:25.226 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.510 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.788 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.788 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.118 { 00:14:26.118 "cntlid": 123, 00:14:26.118 "qid": 0, 00:14:26.118 "state": "enabled", 00:14:26.118 "thread": "nvmf_tgt_poll_group_000", 
00:14:26.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:26.118 "listen_address": { 00:14:26.118 "trtype": "RDMA", 00:14:26.118 "adrfam": "IPv4", 00:14:26.118 "traddr": "192.168.100.8", 00:14:26.118 "trsvcid": "4420" 00:14:26.118 }, 00:14:26.118 "peer_address": { 00:14:26.118 "trtype": "RDMA", 00:14:26.118 "adrfam": "IPv4", 00:14:26.118 "traddr": "192.168.100.8", 00:14:26.118 "trsvcid": "33423" 00:14:26.118 }, 00:14:26.118 "auth": { 00:14:26.118 "state": "completed", 00:14:26.118 "digest": "sha512", 00:14:26.118 "dhgroup": "ffdhe4096" 00:14:26.118 } 00:14:26.118 } 00:14:26.118 ]' 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.118 11:53:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.382 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:26.382 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.947 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.948 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:14:26.948 11:53:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.206 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.464 00:14:27.464 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.464 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.464 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.722 { 00:14:27.722 "cntlid": 125, 00:14:27.722 "qid": 0, 00:14:27.722 "state": "enabled", 00:14:27.722 "thread": "nvmf_tgt_poll_group_000", 00:14:27.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:27.722 "listen_address": { 00:14:27.722 "trtype": "RDMA", 00:14:27.722 "adrfam": "IPv4", 00:14:27.722 "traddr": "192.168.100.8", 00:14:27.722 "trsvcid": "4420" 00:14:27.722 }, 00:14:27.722 "peer_address": { 00:14:27.722 "trtype": "RDMA", 00:14:27.722 "adrfam": "IPv4", 00:14:27.722 "traddr": "192.168.100.8", 00:14:27.722 "trsvcid": "40587" 00:14:27.722 }, 00:14:27.722 "auth": { 00:14:27.722 "state": "completed", 00:14:27.722 "digest": "sha512", 00:14:27.722 "dhgroup": "ffdhe4096" 00:14:27.722 } 00:14:27.722 } 00:14:27.722 ]' 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.722 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.981 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:27.981 11:53:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:28.915 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.916 11:53:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.916 11:53:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.175 00:14:29.433 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.434 11:53:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.434 { 00:14:29.434 "cntlid": 127, 00:14:29.434 "qid": 0, 00:14:29.434 "state": "enabled", 00:14:29.434 "thread": "nvmf_tgt_poll_group_000", 00:14:29.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:29.434 "listen_address": { 00:14:29.434 "trtype": "RDMA", 00:14:29.434 "adrfam": "IPv4", 00:14:29.434 "traddr": "192.168.100.8", 00:14:29.434 "trsvcid": "4420" 00:14:29.434 }, 00:14:29.434 "peer_address": { 00:14:29.434 "trtype": "RDMA", 00:14:29.434 "adrfam": "IPv4", 00:14:29.434 "traddr": "192.168.100.8", 00:14:29.434 "trsvcid": "48927" 00:14:29.434 }, 00:14:29.434 "auth": { 00:14:29.434 "state": "completed", 00:14:29.434 "digest": "sha512", 00:14:29.434 "dhgroup": "ffdhe4096" 00:14:29.434 } 00:14:29.434 } 00:14:29.434 ]' 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.434 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.692 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.692 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.692 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.692 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.692 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.950 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:29.950 11:53:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:30.517 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.776 11:53:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.035 00:14:31.035 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.035 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.035 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.293 11:53:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.293 { 00:14:31.293 "cntlid": 129, 00:14:31.293 "qid": 0, 00:14:31.293 "state": "enabled", 00:14:31.293 "thread": "nvmf_tgt_poll_group_000", 00:14:31.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:31.293 "listen_address": { 00:14:31.293 "trtype": "RDMA", 00:14:31.293 "adrfam": "IPv4", 00:14:31.293 "traddr": "192.168.100.8", 00:14:31.293 "trsvcid": "4420" 00:14:31.293 }, 00:14:31.293 "peer_address": { 00:14:31.293 "trtype": "RDMA", 00:14:31.293 "adrfam": "IPv4", 00:14:31.293 "traddr": "192.168.100.8", 00:14:31.293 "trsvcid": "51396" 00:14:31.293 }, 00:14:31.293 "auth": { 00:14:31.293 "state": "completed", 00:14:31.293 "digest": "sha512", 00:14:31.293 "dhgroup": "ffdhe6144" 00:14:31.293 } 00:14:31.293 } 00:14:31.293 ]' 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.293 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.552 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.552 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.552 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.552 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.552 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.810 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:31.810 11:53:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.377 11:53:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:32.377 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.635 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.894 00:14:32.894 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.894 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:14:32.894 11:53:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.153 { 00:14:33.153 "cntlid": 131, 00:14:33.153 "qid": 0, 00:14:33.153 "state": "enabled", 00:14:33.153 "thread": "nvmf_tgt_poll_group_000", 00:14:33.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:33.153 "listen_address": { 00:14:33.153 "trtype": "RDMA", 00:14:33.153 "adrfam": "IPv4", 00:14:33.153 "traddr": "192.168.100.8", 00:14:33.153 "trsvcid": "4420" 00:14:33.153 }, 00:14:33.153 "peer_address": { 00:14:33.153 "trtype": "RDMA", 00:14:33.153 "adrfam": "IPv4", 00:14:33.153 "traddr": "192.168.100.8", 00:14:33.153 "trsvcid": "41672" 00:14:33.153 }, 00:14:33.153 "auth": { 00:14:33.153 "state": "completed", 00:14:33.153 "digest": "sha512", 00:14:33.153 "dhgroup": "ffdhe6144" 00:14:33.153 } 00:14:33.153 } 00:14:33.153 ]' 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.153 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.412 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.412 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.412 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.412 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.412 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.672 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:33.672 11:53:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret 
DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:34.241 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.501 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.761 00:14:34.761 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.761 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.761 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.023 { 00:14:35.023 "cntlid": 133, 00:14:35.023 "qid": 0, 00:14:35.023 "state": "enabled", 00:14:35.023 "thread": "nvmf_tgt_poll_group_000", 00:14:35.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:35.023 "listen_address": { 00:14:35.023 "trtype": "RDMA", 00:14:35.023 "adrfam": "IPv4", 00:14:35.023 "traddr": "192.168.100.8", 00:14:35.023 "trsvcid": "4420" 00:14:35.023 }, 00:14:35.023 "peer_address": { 00:14:35.023 "trtype": "RDMA", 00:14:35.023 "adrfam": "IPv4", 00:14:35.023 "traddr": "192.168.100.8", 00:14:35.023 "trsvcid": "35763" 00:14:35.023 }, 00:14:35.023 "auth": { 00:14:35.023 "state": "completed", 00:14:35.023 "digest": "sha512", 00:14:35.023 "dhgroup": "ffdhe6144" 00:14:35.023 } 00:14:35.023 } 00:14:35.023 ]' 00:14:35.023 11:53:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.023 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.023 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.282 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.282 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.282 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.282 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.282 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.541 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:35.541 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:36.109 11:53:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.109 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.368 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.627 00:14:36.627 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.627 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.627 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.885 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.886 { 00:14:36.886 "cntlid": 135, 00:14:36.886 "qid": 0, 00:14:36.886 "state": "enabled", 00:14:36.886 "thread": "nvmf_tgt_poll_group_000", 00:14:36.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:36.886 "listen_address": { 00:14:36.886 "trtype": "RDMA", 00:14:36.886 "adrfam": "IPv4", 00:14:36.886 "traddr": "192.168.100.8", 00:14:36.886 "trsvcid": "4420" 00:14:36.886 }, 00:14:36.886 "peer_address": { 00:14:36.886 "trtype": "RDMA", 00:14:36.886 "adrfam": "IPv4", 00:14:36.886 "traddr": "192.168.100.8", 00:14:36.886 "trsvcid": "57696" 00:14:36.886 }, 00:14:36.886 "auth": { 00:14:36.886 "state": "completed", 00:14:36.886 "digest": "sha512", 00:14:36.886 "dhgroup": "ffdhe6144" 00:14:36.886 } 00:14:36.886 } 00:14:36.886 ]' 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.886 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.144 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.144 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.144 11:53:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.144 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 
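
[editor's note] The trace above completes one full connect_authenticate pass for sha512/ffdhe6144 across the configured keys; the entries that follow repeat the same pass for ffdhe8192. Below is a minimal sketch of the loop being exercised, condensed from the target/auth.sh steps visible in this log. The RPC script path, host RPC socket, NQNs, and address are the ones from the trace; the key material itself is placeholder names, the target-side calls go to the target app's default RPC socket, and (as the trace shows) key3 is registered with --dhchap-key only, no controller key.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

for keyid in 1 2; do
    # Pin the host to a single digest/dhgroup combination for this pass.
    $rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Register the host on the target with the key pair under test.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid
    # Attach from the host side, authenticating with the same pair.
    $rpc -s $hostsock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid
    # The target's qpair view must report the negotiated parameters.
    $rpc nvmf_subsystem_get_qpairs $subnqn |
        jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
    # expected output: sha512 ffdhe6144 completed
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
done

In the trace itself, after the RPC-level detach each key is additionally re-authenticated through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:xx:... --dhchap-ctrl-secret ...), and the host entry is removed only after that nvme disconnect; the sketch folds those steps into one iteration for brevity.
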
00:14:37.144 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:38.081 11:53:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.081 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.649 00:14:38.649 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.649 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.649 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.908 { 00:14:38.908 "cntlid": 137, 00:14:38.908 "qid": 0, 00:14:38.908 "state": "enabled", 00:14:38.908 "thread": "nvmf_tgt_poll_group_000", 00:14:38.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:38.908 "listen_address": { 00:14:38.908 "trtype": "RDMA", 00:14:38.908 "adrfam": "IPv4", 00:14:38.908 "traddr": "192.168.100.8", 00:14:38.908 "trsvcid": "4420" 00:14:38.908 }, 00:14:38.908 "peer_address": { 00:14:38.908 "trtype": "RDMA", 00:14:38.908 "adrfam": "IPv4", 00:14:38.908 "traddr": "192.168.100.8", 00:14:38.908 "trsvcid": "58140" 00:14:38.908 }, 00:14:38.908 "auth": { 00:14:38.908 "state": "completed", 00:14:38.908 "digest": "sha512", 00:14:38.908 "dhgroup": "ffdhe8192" 00:14:38.908 } 00:14:38.908 } 00:14:38.908 ]' 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.908 11:53:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.167 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:39.167 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:40.103 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.103 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:40.104 11:53:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.104 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.671 00:14:40.671 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.671 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.671 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.930 { 00:14:40.930 "cntlid": 139, 00:14:40.930 "qid": 0, 00:14:40.930 "state": "enabled", 00:14:40.930 "thread": "nvmf_tgt_poll_group_000", 00:14:40.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:40.930 "listen_address": { 00:14:40.930 "trtype": "RDMA", 00:14:40.930 "adrfam": "IPv4", 00:14:40.930 "traddr": "192.168.100.8", 00:14:40.930 "trsvcid": "4420" 00:14:40.930 }, 00:14:40.930 "peer_address": { 00:14:40.930 "trtype": "RDMA", 00:14:40.930 "adrfam": "IPv4", 00:14:40.930 "traddr": "192.168.100.8", 00:14:40.930 "trsvcid": "40415" 00:14:40.930 }, 00:14:40.930 "auth": { 00:14:40.930 "state": "completed", 00:14:40.930 "digest": "sha512", 00:14:40.930 "dhgroup": "ffdhe8192" 00:14:40.930 } 00:14:40.930 } 00:14:40.930 ]' 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.930 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.931 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.931 11:53:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.189 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:41.189 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: --dhchap-ctrl-secret DHHC-1:02:Yzg2ZjQ5NDk2ZTc5NWUwYjg5YTgyMGExYTZkODc5ZTk1YzA5ZDc1YzI5MzlmMDRkE4QdHQ==: 00:14:41.756 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:42.015 11:53:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.015 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.273 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.273 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.273 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.273 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.531 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.789 { 00:14:42.789 "cntlid": 141, 00:14:42.789 "qid": 0, 00:14:42.789 "state": "enabled", 00:14:42.789 "thread": "nvmf_tgt_poll_group_000", 00:14:42.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:42.789 "listen_address": { 00:14:42.789 "trtype": "RDMA", 00:14:42.789 "adrfam": "IPv4", 00:14:42.789 "traddr": "192.168.100.8", 00:14:42.789 "trsvcid": "4420" 00:14:42.789 }, 00:14:42.789 "peer_address": { 00:14:42.789 "trtype": "RDMA", 00:14:42.789 "adrfam": "IPv4", 00:14:42.789 "traddr": "192.168.100.8", 00:14:42.789 "trsvcid": "57138" 00:14:42.789 }, 00:14:42.789 "auth": { 00:14:42.789 "state": "completed", 00:14:42.789 "digest": "sha512", 00:14:42.789 "dhgroup": "ffdhe8192" 00:14:42.789 } 00:14:42.789 } 00:14:42.789 ]' 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.789 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.047 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.047 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.047 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.047 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.047 11:53:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.305 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:43.305 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:01:NWUzZGQzNTZkZDIwZGI2MzJjNzEwZDYxOWI2YjcwZmMt4jNK: 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:43.871 11:53:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.130 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.695 00:14:44.695 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.695 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.695 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.953 { 00:14:44.953 "cntlid": 143, 00:14:44.953 "qid": 0, 00:14:44.953 "state": "enabled", 00:14:44.953 "thread": "nvmf_tgt_poll_group_000", 00:14:44.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:44.953 "listen_address": { 00:14:44.953 "trtype": "RDMA", 00:14:44.953 "adrfam": "IPv4", 00:14:44.953 "traddr": "192.168.100.8", 00:14:44.953 "trsvcid": "4420" 00:14:44.953 }, 00:14:44.953 "peer_address": { 00:14:44.953 "trtype": "RDMA", 00:14:44.953 "adrfam": "IPv4", 00:14:44.953 "traddr": "192.168.100.8", 00:14:44.953 "trsvcid": "51413" 00:14:44.953 }, 00:14:44.953 "auth": { 00:14:44.953 "state": "completed", 00:14:44.953 "digest": "sha512", 00:14:44.953 "dhgroup": "ffdhe8192" 00:14:44.953 } 00:14:44.953 } 00:14:44.953 ]' 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.953 11:53:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.953 11:53:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.211 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:45.211 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:45.775 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:46.033 11:53:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.033 11:53:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.033 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.291 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.291 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.291 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.291 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.547 00:14:46.547 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.547 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.547 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.803 { 00:14:46.803 "cntlid": 145, 00:14:46.803 "qid": 0, 00:14:46.803 "state": "enabled", 00:14:46.803 "thread": "nvmf_tgt_poll_group_000", 00:14:46.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:46.803 "listen_address": { 00:14:46.803 "trtype": "RDMA", 00:14:46.803 "adrfam": "IPv4", 00:14:46.803 "traddr": "192.168.100.8", 00:14:46.803 "trsvcid": "4420" 00:14:46.803 }, 00:14:46.803 
"peer_address": { 00:14:46.803 "trtype": "RDMA", 00:14:46.803 "adrfam": "IPv4", 00:14:46.803 "traddr": "192.168.100.8", 00:14:46.803 "trsvcid": "35306" 00:14:46.803 }, 00:14:46.803 "auth": { 00:14:46.803 "state": "completed", 00:14:46.803 "digest": "sha512", 00:14:46.803 "dhgroup": "ffdhe8192" 00:14:46.803 } 00:14:46.803 } 00:14:46.803 ]' 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.803 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.060 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.060 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.060 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.060 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.060 11:53:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.317 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:47.317 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2UwODRjYmJiMzUzZDM1NmQ1MDc0MTY5ZGFmMmI5NWM0ZTg2ZGYwOTZhNjVmNjU0Eso4tg==: --dhchap-ctrl-secret DHHC-1:03:OTQ2ZmQ5ZjM1ODk3MWRmZmVlZmJiYmU2OWJkODI1MjYwMjQyOTIwNWRmZGYzYWNkOTVmNjQyY2ZlZmVlNjJiZrYyKXc=: 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.884 11:53:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:47.884 11:53:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:48.450 request: 00:14:48.450 { 00:14:48.450 "name": "nvme0", 00:14:48.450 "trtype": "rdma", 00:14:48.450 "traddr": "192.168.100.8", 00:14:48.450 "adrfam": "ipv4", 00:14:48.450 "trsvcid": "4420", 00:14:48.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:48.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:48.450 "prchk_reftag": false, 00:14:48.450 "prchk_guard": false, 00:14:48.450 "hdgst": false, 00:14:48.450 "ddgst": false, 00:14:48.450 "dhchap_key": "key2", 00:14:48.450 "allow_unrecognized_csi": false, 00:14:48.450 "method": "bdev_nvme_attach_controller", 00:14:48.450 "req_id": 1 00:14:48.450 } 00:14:48.450 Got JSON-RPC error response 00:14:48.450 response: 00:14:48.450 { 00:14:48.450 "code": -5, 00:14:48.450 "message": "Input/output error" 00:14:48.450 } 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:48.450 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:49.017 request: 00:14:49.017 { 00:14:49.017 "name": "nvme0", 00:14:49.017 "trtype": "rdma", 00:14:49.017 "traddr": "192.168.100.8", 00:14:49.017 "adrfam": "ipv4", 00:14:49.017 "trsvcid": "4420", 00:14:49.017 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:49.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:49.017 "prchk_reftag": false, 00:14:49.017 "prchk_guard": false, 00:14:49.017 "hdgst": false, 00:14:49.017 "ddgst": false, 00:14:49.017 "dhchap_key": "key1", 00:14:49.017 "dhchap_ctrlr_key": "ckey2", 00:14:49.017 "allow_unrecognized_csi": false, 00:14:49.017 "method": "bdev_nvme_attach_controller", 00:14:49.017 "req_id": 1 00:14:49.017 } 00:14:49.017 Got JSON-RPC error response 00:14:49.017 response: 00:14:49.017 { 00:14:49.017 "code": -5, 00:14:49.017 "message": "Input/output error" 00:14:49.017 } 00:14:49.017 11:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.017 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.018 11:53:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.585 request: 00:14:49.585 { 00:14:49.585 "name": "nvme0", 
00:14:49.585 "trtype": "rdma", 00:14:49.585 "traddr": "192.168.100.8", 00:14:49.585 "adrfam": "ipv4", 00:14:49.585 "trsvcid": "4420", 00:14:49.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:49.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:49.585 "prchk_reftag": false, 00:14:49.585 "prchk_guard": false, 00:14:49.585 "hdgst": false, 00:14:49.585 "ddgst": false, 00:14:49.585 "dhchap_key": "key1", 00:14:49.585 "dhchap_ctrlr_key": "ckey1", 00:14:49.585 "allow_unrecognized_csi": false, 00:14:49.585 "method": "bdev_nvme_attach_controller", 00:14:49.585 "req_id": 1 00:14:49.585 } 00:14:49.585 Got JSON-RPC error response 00:14:49.585 response: 00:14:49.585 { 00:14:49.585 "code": -5, 00:14:49.585 "message": "Input/output error" 00:14:49.585 } 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3195255 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3195255 ']' 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3195255 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195255 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195255' 00:14:49.585 killing process with pid 3195255 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3195255 00:14:49.585 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3195255 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.844 11:53:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3219168 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3219168 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3219168 ']' 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.844 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3219168 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3219168 ']' 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
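[editor's note] In the trace above, nvmfappstart --wait-for-rpc launches nvmf_tgt (pid 3219168 here) and waitforlisten then polls the UNIX-domain RPC socket until the target answers. A minimal sketch of that polling pattern follows; the real helpers live in SPDK's test harness (test/common/autotest_common.sh), and the probe RPC, retry budget, and sleep interval here are illustrative assumptions, not the harness's exact implementation.

  # Hypothetical condensed form of the wait-for-RPC loop traced above.
  # Assumes scripts/rpc.py on PATH-relative location and a target started
  # with --wait-for-rpc; retry count and probe method are assumptions.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( i++ < 100 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process died
          # any RPC that answers proves the socket is serving requests
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1                                     # timed out waiting
  }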
00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.102 11:53:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.102 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:50.102 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:50.102 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.102 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 null0 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p6B 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.XBW ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XBW 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AXM 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Fol ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fol 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
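[editor's note] The target/auth.sh@174-176 trace above and below is a loop that registers each generated DHCHAP key file, plus its optional controller (bidirectional) key, with the target's keyring. A plain-bash reading of that loop, using only the key paths visible in this log; rpc_cmd is approximated inline, and the array contents reflect just what this excerpt shows (the files themselves were generated earlier in the run):

  # Reconstruction of the loop traced at target/auth.sh@174-176.
  # rpc_cmd below is an approximation of the harness helper; the real one
  # adds retries and socket selection.
  rpc_cmd() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  keys=(/tmp/spdk.key-null.p6B /tmp/spdk.key-sha256.AXM
        /tmp/spdk.key-sha384.TcP /tmp/spdk.key-sha512.Crx)
  ckeys=(/tmp/spdk.key-sha512.XBW /tmp/spdk.key-sha384.Fol
         /tmp/spdk.key-sha256.iJa "")

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
      # controller key is optional per slot; ckey3 is empty in this run
      [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done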
00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TcP 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.iJa ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iJa 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Crx 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:50.361 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.362 11:53:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.297 nvme0n1 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.297 { 00:14:51.297 "cntlid": 1, 00:14:51.297 "qid": 0, 00:14:51.297 "state": "enabled", 00:14:51.297 "thread": "nvmf_tgt_poll_group_000", 00:14:51.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:51.297 "listen_address": { 00:14:51.297 "trtype": "RDMA", 00:14:51.297 "adrfam": "IPv4", 00:14:51.297 "traddr": "192.168.100.8", 00:14:51.297 "trsvcid": "4420" 00:14:51.297 }, 00:14:51.297 "peer_address": { 00:14:51.297 "trtype": "RDMA", 00:14:51.297 "adrfam": "IPv4", 00:14:51.297 "traddr": "192.168.100.8", 00:14:51.297 "trsvcid": "38421" 00:14:51.297 }, 00:14:51.297 "auth": { 00:14:51.297 "state": "completed", 00:14:51.297 "digest": "sha512", 00:14:51.297 "dhgroup": "ffdhe8192" 00:14:51.297 } 00:14:51.297 } 00:14:51.297 ]' 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.297 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.555 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.555 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.555 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.555 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.555 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.813 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:51.813 11:53:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:52.380 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.639 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.898 request: 00:14:52.898 { 00:14:52.898 "name": "nvme0", 00:14:52.898 "trtype": "rdma", 00:14:52.898 "traddr": "192.168.100.8", 00:14:52.898 "adrfam": "ipv4", 00:14:52.898 "trsvcid": "4420", 00:14:52.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:52.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:52.898 "prchk_reftag": false, 00:14:52.898 "prchk_guard": false, 00:14:52.898 "hdgst": false, 00:14:52.898 "ddgst": false, 00:14:52.898 "dhchap_key": "key3", 00:14:52.898 "allow_unrecognized_csi": false, 00:14:52.898 "method": "bdev_nvme_attach_controller", 00:14:52.898 "req_id": 1 00:14:52.898 } 00:14:52.898 Got JSON-RPC error response 00:14:52.898 response: 00:14:52.898 { 00:14:52.898 "code": -5, 00:14:52.898 "message": "Input/output error" 00:14:52.898 } 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:52.898 11:54:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
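[editor's note] Each NOT bdev_connect step in this trace is an expected-failure assertion: the harness deliberately narrows the host's allowed DHCHAP parameters (here bdev_nvme_set_options --dhchap-digests sha256) or supplies a key the target will not accept, then requires bdev_nvme_attach_controller to fail — which surfaces as the JSON-RPC code -5 (Input/output error) responses logged above. A minimal sketch of that wrapper pattern, under the assumption that it simply inverts the wrapped command's exit status (the real NOT in autotest_common.sh also validates its argument and tracks the exit code, as the @640-@679 lines show):

  # Hypothetical minimal form of the expected-failure wrapper used above:
  # succeed only if the wrapped command fails.
  NOT_sketch() {
      if "$@"; then
          echo "expected failure, but '$*' succeeded" >&2
          return 1
      fi
      return 0    # command failed, which is what the test wanted
  }

  # usage mirroring the trace: restrict host digests, then require the
  # attach with key3 to be rejected during DH-HMAC-CHAP negotiation
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  NOT_sketch scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3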
00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.157 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.415 request: 00:14:53.415 { 00:14:53.415 "name": "nvme0", 00:14:53.415 "trtype": "rdma", 00:14:53.415 "traddr": "192.168.100.8", 00:14:53.415 "adrfam": "ipv4", 00:14:53.415 "trsvcid": "4420", 00:14:53.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:53.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:53.415 "prchk_reftag": false, 00:14:53.415 "prchk_guard": false, 00:14:53.415 "hdgst": false, 00:14:53.415 "ddgst": false, 00:14:53.415 "dhchap_key": "key3", 00:14:53.415 "allow_unrecognized_csi": false, 00:14:53.415 "method": "bdev_nvme_attach_controller", 00:14:53.415 "req_id": 1 00:14:53.415 } 00:14:53.415 Got JSON-RPC error response 00:14:53.415 response: 00:14:53.415 { 00:14:53.415 "code": -5, 00:14:53.415 "message": "Input/output error" 00:14:53.415 } 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.415 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.416 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.674 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:53.933 request: 00:14:53.933 { 00:14:53.933 "name": "nvme0", 00:14:53.933 "trtype": "rdma", 00:14:53.933 "traddr": "192.168.100.8", 00:14:53.933 "adrfam": "ipv4", 00:14:53.933 "trsvcid": "4420", 00:14:53.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:53.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:53.933 "prchk_reftag": false, 00:14:53.933 "prchk_guard": false, 00:14:53.933 "hdgst": false, 00:14:53.933 "ddgst": false, 00:14:53.933 "dhchap_key": "key0", 00:14:53.933 "dhchap_ctrlr_key": "key1", 00:14:53.933 "allow_unrecognized_csi": false, 00:14:53.933 "method": "bdev_nvme_attach_controller", 00:14:53.933 "req_id": 1 00:14:53.933 } 00:14:53.933 Got JSON-RPC error response 00:14:53.933 response: 00:14:53.933 { 00:14:53.933 "code": -5, 00:14:53.933 "message": "Input/output error" 00:14:53.933 } 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.933 
11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:53.933 11:54:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:54.191 nvme0n1 00:14:54.191 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:54.191 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:54.191 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.450 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.450 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.450 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:54.708 11:54:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:55.275 nvme0n1 00:14:55.275 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:55.275 11:54:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:55.275 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:55.533 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.792 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.792 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:55.792 11:54:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: --dhchap-ctrl-secret DHHC-1:03:YTRmNDRhMTE3YWNmM2MyMjYyNjQzNGI4M2NhMDY1OGY3MGIwOWQwYzc4MjcxYjMwM2RjNGNlNjZmNWU1Y2ZmNSXxYf8=: 00:14:56.358 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:56.358 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:56.358 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.359 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:56.617 11:54:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:57.183 request: 00:14:57.183 { 00:14:57.183 "name": "nvme0", 00:14:57.183 "trtype": "rdma", 00:14:57.183 "traddr": "192.168.100.8", 00:14:57.183 "adrfam": "ipv4", 00:14:57.183 "trsvcid": "4420", 00:14:57.183 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:57.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:57.183 "prchk_reftag": false, 00:14:57.183 "prchk_guard": false, 00:14:57.183 "hdgst": false, 00:14:57.183 "ddgst": false, 00:14:57.183 "dhchap_key": "key1", 00:14:57.183 "allow_unrecognized_csi": false, 00:14:57.183 "method": "bdev_nvme_attach_controller", 00:14:57.183 "req_id": 1 00:14:57.183 } 00:14:57.183 Got JSON-RPC error response 00:14:57.183 response: 00:14:57.183 { 00:14:57.183 "code": -5, 00:14:57.183 "message": "Input/output error" 00:14:57.183 } 00:14:57.183 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:57.183 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.184 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.184 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.184 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:57.184 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:57.184 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:57.749 nvme0n1 00:14:57.749 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:57.749 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:57.749 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.008 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.008 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.008 11:54:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:58.266 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:58.525 nvme0n1 00:14:58.525 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:58.525 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.525 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:58.784 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.784 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.784 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: '' 2s 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: ]] 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmI0YTRhZGE3ODNkZWJlZDc5Njk3Njc3YjY1NTgxOTGBSFHR: 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:59.043 11:54:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.945 11:54:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: 2s 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: ]] 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGMxOTBmNDJlY2ZmNWJlY2JjM2M1M2VhYzMzOTYwZjk5YmY3NjM0MjQ2NmFlNDA5FbID8Q==: 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:00.945 11:54:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:03.474 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:03.474 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:03.474 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:03.475 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:03.475 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:03.475 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:03.475 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:03.475 11:54:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.475 11:54:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:03.475 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:04.040 nvme0n1 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:04.040 11:54:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:04.608 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:04.867 11:54:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:04.867 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:04.867 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:05.126 11:54:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:05.126 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:05.384 request: 00:15:05.384 { 00:15:05.384 "name": "nvme0", 00:15:05.384 "dhchap_key": "key1", 00:15:05.384 "dhchap_ctrlr_key": "key3", 00:15:05.384 "method": "bdev_nvme_set_keys", 00:15:05.384 "req_id": 1 00:15:05.384 } 00:15:05.384 Got JSON-RPC error response 00:15:05.384 response: 00:15:05.384 { 00:15:05.384 "code": -13, 00:15:05.384 "message": "Permission denied" 00:15:05.384 } 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:05.643 11:54:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:07.019 11:54:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:07.584 nvme0n1 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:07.584 
11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:07.584 11:54:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:08.151 request: 00:15:08.151 { 00:15:08.151 "name": "nvme0", 00:15:08.151 "dhchap_key": "key2", 00:15:08.151 "dhchap_ctrlr_key": "key0", 00:15:08.151 "method": "bdev_nvme_set_keys", 00:15:08.151 "req_id": 1 00:15:08.151 } 00:15:08.151 Got JSON-RPC error response 00:15:08.151 response: 00:15:08.151 { 00:15:08.151 "code": -13, 00:15:08.151 "message": "Permission denied" 00:15:08.151 } 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:08.151 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.410 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:08.410 11:54:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:09.345 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:09.345 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:09.345 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:09.603 11:54:17 
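Both NOT-wrapped calls above are expected-failure assertions: re-keying with a pair the subsystem no longer permits (key1/key3 earlier, key2/key0 here) must be rejected, and the target answers with JSON-RPC error -13 "Permission denied"; the (( ... != 0 ))/sleep 1s loop then waits for bdev_nvme_get_controllers to report that the failed controller dropped off. The same assertion without the harness's NOT/es bookkeeping might look like this paraphrase, not the harness code itself:

  # expect rejection; treat success as a test failure
  if rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo 'unexpected success: re-key should be denied' >&2
      exit 1
  fi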
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3195408 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3195408 ']' 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3195408 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195408 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195408' 00:15:09.603 killing process with pid 3195408 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3195408 00:15:09.603 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3195408 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:09.862 rmmod nvme_rdma 00:15:09.862 rmmod nvme_fabrics 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3219168 ']' 00:15:09.862 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3219168 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3219168 ']' 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3219168 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219168 00:15:10.120 11:54:17 
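killprocess above tears down the host-side daemon: it verifies the pid is alive with kill -0, checks via ps that the command name is not sudo (so it never kills a privileged wrapper by mistake), then kills and waits. A simplified paraphrase of that flow; the real helper has extra platform and sudo handling not reproduced here:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0              # already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null || true      # wait works: the daemon is a child
  }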
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219168' 00:15:10.120 killing process with pid 3219168 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3219168 00:15:10.120 11:54:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3219168 00:15:10.120 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.120 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:10.120 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.p6B /tmp/spdk.key-sha256.AXM /tmp/spdk.key-sha384.TcP /tmp/spdk.key-sha512.Crx /tmp/spdk.key-sha512.XBW /tmp/spdk.key-sha384.Fol /tmp/spdk.key-sha256.iJa '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:15:10.120 00:15:10.120 real 2m43.717s 00:15:10.120 user 6m20.426s 00:15:10.120 sys 0m20.546s 00:15:10.120 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.120 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.120 ************************************ 00:15:10.120 END TEST nvmf_auth_target 00:15:10.120 ************************************ 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:10.379 ************************************ 00:15:10.379 START TEST nvmf_srq_overwhelm 00:15:10.379 ************************************ 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:10.379 * Looking for test storage... 
00:15:10.379 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.379 --rc genhtml_branch_coverage=1 00:15:10.379 --rc genhtml_function_coverage=1 00:15:10.379 --rc genhtml_legend=1 00:15:10.379 --rc geninfo_all_blocks=1 00:15:10.379 --rc geninfo_unexecuted_blocks=1 00:15:10.379 00:15:10.379 ' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.379 --rc genhtml_branch_coverage=1 00:15:10.379 --rc genhtml_function_coverage=1 00:15:10.379 --rc genhtml_legend=1 00:15:10.379 --rc geninfo_all_blocks=1 00:15:10.379 --rc geninfo_unexecuted_blocks=1 00:15:10.379 00:15:10.379 ' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.379 --rc genhtml_branch_coverage=1 00:15:10.379 --rc genhtml_function_coverage=1 00:15:10.379 --rc genhtml_legend=1 00:15:10.379 --rc geninfo_all_blocks=1 00:15:10.379 --rc geninfo_unexecuted_blocks=1 00:15:10.379 00:15:10.379 ' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.379 --rc genhtml_branch_coverage=1 00:15:10.379 --rc genhtml_function_coverage=1 00:15:10.379 --rc genhtml_legend=1 00:15:10.379 --rc geninfo_all_blocks=1 00:15:10.379 --rc geninfo_unexecuted_blocks=1 00:15:10.379 00:15:10.379 ' 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.379 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.380 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
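In the common.sh preamble above, the host identity is derived once and reused for every connect: nvme gen-hostnqn prints a UUID-based NQN, and NVME_HOSTID is its trailing UUID, so the kernel initiator and the SPDK host share one identity across the run. Reconstructed from the values in the trace; the parameter-expansion form of the UUID extraction is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00ad29c2-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumed: strip everything through 'uuid:'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")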
00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:10.639 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:15:10.639 11:54:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:17.206 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:17.206 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.206 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:17.207 Found net devices under 0000:da:00.0: mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:17.207 Found net devices under 0000:da:00.1: mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
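The discovery pass above matches the two ConnectX functions (0x15b3:0x1015 at 0000:da:00.0 and 0000:da:00.1) and resolves each PCI function to its netdev through sysfs, yielding mlx_0_0 and mlx_0_1. The sysfs hop in isolation, using one of the addresses from this log:

  pci=0000:da:00.0
  # the kernel lists the bound net interface(s) under the PCI device node
  ls "/sys/bus/pci/devices/$pci/net/"    # prints mlx_0_0 here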
00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:17.207 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.207 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:17.207 altname enp218s0f0np0 00:15:17.207 altname ens818f0np0 00:15:17.207 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.207 valid_lft forever preferred_lft forever 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:17.207 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.207 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:17.207 altname enp218s0f1np1 00:15:17.207 altname ens818f1np1 00:15:17.207 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.207 valid_lft forever preferred_lft forever 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.207 192.168.100.9' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:17.207 192.168.100.9' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:17.207 192.168.100.9' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3226328 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3226328 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3226328 ']' 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
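get_ip_address above is a three-stage pipeline over ip(8) one-line output, and head/tail over the collected list give the first and second target IPs. As used in this run:

  get_ip_address() {                     # first IPv4 address on an interface
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9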
00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.207 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.207 [2024-12-09 11:54:24.296842] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:15:17.208 [2024-12-09 11:54:24.296887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.208 [2024-12-09 11:54:24.355696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.208 [2024-12-09 11:54:24.399697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.208 [2024-12-09 11:54:24.399734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.208 [2024-12-09 11:54:24.399741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.208 [2024-12-09 11:54:24.399748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.208 [2024-12-09 11:54:24.399753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.208 [2024-12-09 11:54:24.401291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.208 [2024-12-09 11:54:24.401403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.208 [2024-12-09 11:54:24.401510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.208 [2024-12-09 11:54:24.401511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 [2024-12-09 11:54:24.563338] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x241b940/0x241fe30) succeed. 00:15:17.208 [2024-12-09 11:54:24.574972] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x241cfd0/0x24614d0) succeed. 
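With the target up on four reactors, nvmf_create_transport registers the RDMA transport and opens both mlx5 IB devices (the two create_ib_device notices above). The RPC as issued here; my reading of the short flags (-u as io-unit-size, -s as max-srq-depth, which fits a test that deliberately overwhelms the shared receive queue) is an assumption, not stated in the log:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024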
00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 Malloc0 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.208 [2024-12-09 11:54:24.683154] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.208 11:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# grep -q -w nvme0n1 00:15:17.773 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 Malloc1 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.774 11:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.706 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:18.964 Malloc2 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.964 11:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:15:19.897 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:15:19.897 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:19.897 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:15:19.898 11:54:27 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 Malloc3 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.898 11:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:20.831 
11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:20.831 Malloc4 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.831 11:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:22.203 Malloc5 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:22.203 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.204 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:15:22.204 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.204 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:22.204 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.204 11:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:23.136 11:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:15:23.136 
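Each of the six loop iterations above repeats one pattern per controller: create the subsystem, back it with a 64 MiB malloc bdev of 512-byte blocks, attach it as a namespace, expose an RDMA listener on 192.168.100.8:4420, and connect the kernel initiator. A condensed sketch using rpc.py directly (hostnqn/hostid values as in the log; the rpc_cmd wrapper and the waitforblk polling are elided); the fio job file it feeds is dumped next:

    for i in $(seq 0 5); do
        $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i    # -a: allow any host, -s: serial number
        $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
            --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    done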
[global] 00:15:23.136 thread=1 00:15:23.136 invalidate=1 00:15:23.136 rw=read 00:15:23.136 time_based=1 00:15:23.136 runtime=10 00:15:23.136 ioengine=libaio 00:15:23.136 direct=1 00:15:23.136 bs=1048576 00:15:23.136 iodepth=128 00:15:23.136 norandommap=1 00:15:23.136 numjobs=13 00:15:23.136 00:15:23.136 [job0] 00:15:23.136 filename=/dev/nvme0n1 00:15:23.136 [job1] 00:15:23.136 filename=/dev/nvme1n1 00:15:23.136 [job2] 00:15:23.136 filename=/dev/nvme2n1 00:15:23.136 [job3] 00:15:23.136 filename=/dev/nvme3n1 00:15:23.136 [job4] 00:15:23.136 filename=/dev/nvme4n1 00:15:23.136 [job5] 00:15:23.136 filename=/dev/nvme5n1 00:15:23.136 Could not set queue depth (nvme0n1) 00:15:23.136 Could not set queue depth (nvme1n1) 00:15:23.136 Could not set queue depth (nvme2n1) 00:15:23.136 Could not set queue depth (nvme3n1) 00:15:23.136 Could not set queue depth (nvme4n1) 00:15:23.136 Could not set queue depth (nvme5n1) 00:15:23.394 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 00:15:23.394 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 00:15:23.394 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 00:15:23.394 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 00:15:23.394 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 00:15:23.394 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:23.394 ... 
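The job file above drives 1 MiB sequential reads (bs=1048576) at queue depth 128 through libaio with direct I/O, 13 threads per namespace for 10 seconds; 6 devices times 13 jobs gives the 78 threads fio announces next. A hedged single-device command-line equivalent of one [jobN] stanza:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=1M --iodepth=128 --numjobs=13 --thread \
        --norandommap --invalidate=1 --time_based --runtime=10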
00:15:23.394 fio-3.35 00:15:23.394 Starting 78 threads 00:15:35.594 00:15:35.594 job0: (groupid=0, jobs=1): err= 0: pid=3227739: Mon Dec 9 11:54:42 2024 00:15:35.594 read: IOPS=2, BW=2663KiB/s (2727kB/s)(27.0MiB/10383msec) 00:15:35.594 slat (usec): min=1664, max=2103.3k, avg=382408.16, stdev=788834.25 00:15:35.594 clat (msec): min=56, max=10380, avg=5085.30, stdev=3191.10 00:15:35.594 lat (msec): min=2091, max=10381, avg=5467.71, stdev=3183.99 00:15:35.594 clat percentiles (msec): 00:15:35.594 | 1.00th=[ 57], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2140], 00:15:35.594 | 30.00th=[ 2198], 40.00th=[ 4245], 50.00th=[ 4279], 60.00th=[ 6409], 00:15:35.594 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10268], 00:15:35.594 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.594 | 99.99th=[10402] 00:15:35.594 lat (msec) : 100=3.70%, >=2000=96.30% 00:15:35.594 cpu : usr=0.00%, sys=0.16%, ctx=70, majf=0, minf=6913 00:15:35.594 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:15:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:35.594 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.594 job0: (groupid=0, jobs=1): err= 0: pid=3227740: Mon Dec 9 11:54:42 2024 00:15:35.594 read: IOPS=25, BW=25.2MiB/s (26.4MB/s)(264MiB/10469msec) 00:15:35.594 slat (usec): min=53, max=2120.4k, avg=39470.43, stdev=254150.84 00:15:35.594 clat (msec): min=46, max=9370, avg=4776.68, stdev=3908.88 00:15:35.594 lat (msec): min=783, max=9372, avg=4816.15, stdev=3904.15 00:15:35.594 clat percentiles (msec): 00:15:35.594 | 1.00th=[ 785], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 818], 00:15:35.594 | 30.00th=[ 961], 40.00th=[ 1116], 50.00th=[ 2937], 60.00th=[ 8658], 00:15:35.594 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9329], 00:15:35.594 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:15:35.594 | 99.99th=[ 9329] 00:15:35.594 bw ( KiB/s): min= 4096, max=151552, per=1.25%, avg=46421.33, stdev=64927.31, samples=6 00:15:35.594 iops : min= 4, max= 148, avg=45.33, stdev=63.41, samples=6 00:15:35.594 lat (msec) : 50=0.38%, 1000=31.44%, 2000=15.91%, >=2000=52.27% 00:15:35.594 cpu : usr=0.02%, sys=1.06%, ctx=315, majf=0, minf=32769 00:15:35.594 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.1%, >=64=76.1% 00:15:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.594 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:15:35.594 issued rwts: total=264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.594 job0: (groupid=0, jobs=1): err= 0: pid=3227741: Mon Dec 9 11:54:42 2024 00:15:35.594 read: IOPS=7, BW=8153KiB/s (8348kB/s)(83.0MiB/10425msec) 00:15:35.594 slat (usec): min=422, max=2111.7k, avg=125062.30, stdev=460761.52 00:15:35.594 clat (msec): min=44, max=10422, avg=7054.87, stdev=2142.32 00:15:35.594 lat (msec): min=2108, max=10424, avg=7179.93, stdev=2028.01 00:15:35.594 clat percentiles (msec): 00:15:35.594 | 1.00th=[ 45], 5.00th=[ 5873], 10.00th=[ 5940], 20.00th=[ 5940], 00:15:35.594 | 30.00th=[ 6074], 40.00th=[ 6074], 50.00th=[ 6208], 60.00th=[ 6208], 00:15:35.594 | 70.00th=[ 6409], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 00:15:35.594 | 99.00th=[10402], 
99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.594 | 99.99th=[10402] 00:15:35.594 lat (msec) : 50=1.20%, >=2000=98.80% 00:15:35.594 cpu : usr=0.00%, sys=0.51%, ctx=130, majf=0, minf=21249 00:15:35.594 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:15:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:35.594 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.594 job0: (groupid=0, jobs=1): err= 0: pid=3227742: Mon Dec 9 11:54:42 2024 00:15:35.594 read: IOPS=29, BW=29.0MiB/s (30.4MB/s)(303MiB/10444msec) 00:15:35.594 slat (usec): min=67, max=4322.0k, avg=34417.25, stdev=295621.24 00:15:35.594 clat (msec): min=13, max=9739, avg=4287.16, stdev=4051.18 00:15:35.595 lat (msec): min=499, max=9745, avg=4321.58, stdev=4054.73 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 502], 5.00th=[ 518], 10.00th=[ 523], 20.00th=[ 531], 00:15:35.595 | 30.00th=[ 567], 40.00th=[ 617], 50.00th=[ 1787], 60.00th=[ 7819], 00:15:35.595 | 70.00th=[ 8020], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9731], 00:15:35.595 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:15:35.595 | 99.99th=[ 9731] 00:15:35.595 bw ( KiB/s): min= 2048, max=243712, per=1.38%, avg=51200.00, stdev=87753.87, samples=7 00:15:35.595 iops : min= 2, max= 238, avg=50.00, stdev=85.70, samples=7 00:15:35.595 lat (msec) : 20=0.33%, 500=0.66%, 750=41.91%, 2000=12.87%, >=2000=44.22% 00:15:35.595 cpu : usr=0.00%, sys=1.32%, ctx=338, majf=0, minf=32770 00:15:35.595 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:15:35.595 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227743: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=50, BW=50.8MiB/s (53.3MB/s)(527MiB/10366msec) 00:15:35.595 slat (usec): min=43, max=2141.2k, avg=19572.58, stdev=153984.95 00:15:35.595 clat (msec): min=46, max=6483, avg=2383.38, stdev=2046.88 00:15:35.595 lat (msec): min=791, max=6494, avg=2402.95, stdev=2049.54 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 793], 5.00th=[ 827], 10.00th=[ 852], 20.00th=[ 911], 00:15:35.595 | 30.00th=[ 927], 40.00th=[ 944], 50.00th=[ 1888], 60.00th=[ 1921], 00:15:35.595 | 70.00th=[ 1938], 80.00th=[ 5805], 90.00th=[ 6074], 95.00th=[ 6275], 00:15:35.595 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:15:35.595 | 99.99th=[ 6477] 00:15:35.595 bw ( KiB/s): min= 2048, max=163840, per=2.20%, avg=81715.20, stdev=63619.63, samples=10 00:15:35.595 iops : min= 2, max= 160, avg=79.80, stdev=62.13, samples=10 00:15:35.595 lat (msec) : 50=0.19%, 1000=48.96%, 2000=23.91%, >=2000=26.94% 00:15:35.595 cpu : usr=0.01%, sys=1.38%, ctx=445, majf=0, minf=32769 00:15:35.595 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.595 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227744: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=146, BW=147MiB/s (154MB/s)(1513MiB/10316msec) 00:15:35.595 slat (usec): min=30, max=2106.4k, avg=6775.91, stdev=72927.63 00:15:35.595 clat (msec): min=57, max=5660, avg=831.37, stdev=1280.62 00:15:35.595 lat (msec): min=119, max=5662, avg=838.15, stdev=1286.36 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 120], 5.00th=[ 121], 10.00th=[ 121], 20.00th=[ 122], 00:15:35.595 | 30.00th=[ 124], 40.00th=[ 176], 50.00th=[ 426], 60.00th=[ 651], 00:15:35.595 | 70.00th=[ 785], 80.00th=[ 818], 90.00th=[ 1871], 95.00th=[ 4329], 00:15:35.595 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 5671], 99.95th=[ 5671], 00:15:35.595 | 99.99th=[ 5671] 00:15:35.595 bw ( KiB/s): min=28672, max=874496, per=6.36%, avg=236258.17, stdev=266835.78, samples=12 00:15:35.595 iops : min= 28, max= 854, avg=230.67, stdev=260.48, samples=12 00:15:35.595 lat (msec) : 100=0.07%, 250=42.96%, 500=10.18%, 750=14.08%, 1000=18.11% 00:15:35.595 lat (msec) : 2000=5.29%, >=2000=9.32% 00:15:35.595 cpu : usr=0.06%, sys=1.85%, ctx=1395, majf=0, minf=32769 00:15:35.595 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.595 issued rwts: total=1513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227745: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=6, BW=6855KiB/s (7019kB/s)(70.0MiB/10457msec) 00:15:35.595 slat (usec): min=791, max=2109.9k, avg=148583.22, stdev=515998.53 00:15:35.595 clat (msec): min=55, max=10455, avg=8752.59, stdev=2913.31 00:15:35.595 lat (msec): min=2087, max=10456, avg=8901.17, stdev=2722.27 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 56], 5.00th=[ 2106], 10.00th=[ 2165], 20.00th=[ 6409], 00:15:35.595 | 30.00th=[ 8658], 40.00th=[10268], 50.00th=[10402], 60.00th=[10402], 00:15:35.595 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.595 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.595 | 99.99th=[10402] 00:15:35.595 lat (msec) : 100=1.43%, >=2000=98.57% 00:15:35.595 cpu : usr=0.00%, sys=0.59%, ctx=111, majf=0, minf=17921 00:15:35.595 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:35.595 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227746: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=6, BW=7150KiB/s (7322kB/s)(72.0MiB/10311msec) 00:15:35.595 slat (usec): min=466, max=2106.2k, avg=142439.12, stdev=475230.38 00:15:35.595 clat (msec): min=55, max=10287, avg=7096.42, stdev=3017.35 00:15:35.595 lat (msec): min=2088, max=10310, avg=7238.86, stdev=2920.80 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 55], 5.00th=[ 3876], 10.00th=[ 3943], 20.00th=[ 4077], 00:15:35.595 | 30.00th=[ 4178], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 9866], 00:15:35.595 | 70.00th=[10000], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 
00:15:35.595 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:15:35.595 | 99.99th=[10268] 00:15:35.595 lat (msec) : 100=1.39%, >=2000=98.61% 00:15:35.595 cpu : usr=0.00%, sys=0.35%, ctx=156, majf=0, minf=18433 00:15:35.595 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:35.595 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227747: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=8, BW=8406KiB/s (8608kB/s)(86.0MiB/10476msec) 00:15:35.595 slat (usec): min=761, max=2109.2k, avg=121315.26, stdev=465623.97 00:15:35.595 clat (msec): min=41, max=10472, avg=9290.04, stdev=2530.25 00:15:35.595 lat (msec): min=2104, max=10474, avg=9411.36, stdev=2323.26 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 42], 5.00th=[ 2106], 10.00th=[ 4329], 20.00th=[10134], 00:15:35.595 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268], 00:15:35.595 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.595 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:15:35.595 | 99.99th=[10537] 00:15:35.595 lat (msec) : 50=1.16%, >=2000=98.84% 00:15:35.595 cpu : usr=0.01%, sys=0.70%, ctx=132, majf=0, minf=22017 00:15:35.595 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:35.595 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227748: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=3, BW=3154KiB/s (3230kB/s)(32.0MiB/10389msec) 00:15:35.595 slat (usec): min=402, max=2086.8k, avg=323330.29, stdev=725372.54 00:15:35.595 clat (msec): min=41, max=10387, avg=5971.64, stdev=3173.76 00:15:35.595 lat (msec): min=2014, max=10388, avg=6294.97, stdev=3075.66 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 42], 5.00th=[ 2022], 10.00th=[ 2106], 20.00th=[ 2106], 00:15:35.595 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:15:35.595 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:15:35.595 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.595 | 99.99th=[10402] 00:15:35.595 lat (msec) : 50=3.12%, >=2000=96.88% 00:15:35.595 cpu : usr=0.00%, sys=0.17%, ctx=80, majf=0, minf=8193 00:15:35.595 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:35.595 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227749: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=8, BW=8605KiB/s (8812kB/s)(88.0MiB/10472msec) 00:15:35.595 slat (usec): min=544, max=2076.8k, avg=118454.35, stdev=441860.46 00:15:35.595 clat (msec): min=47, max=10469, avg=7942.55, stdev=3484.74 00:15:35.595 
lat (msec): min=1906, max=10471, avg=8061.01, stdev=3389.13 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 47], 5.00th=[ 1921], 10.00th=[ 2005], 20.00th=[ 2106], 00:15:35.595 | 30.00th=[ 6409], 40.00th=[10134], 50.00th=[10134], 60.00th=[10402], 00:15:35.595 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.595 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:15:35.595 | 99.99th=[10537] 00:15:35.595 lat (msec) : 50=1.14%, 2000=6.82%, >=2000=92.05% 00:15:35.595 cpu : usr=0.00%, sys=0.59%, ctx=182, majf=0, minf=22529 00:15:35.595 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4% 00:15:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.595 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:35.595 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.595 job0: (groupid=0, jobs=1): err= 0: pid=3227750: Mon Dec 9 11:54:42 2024 00:15:35.595 read: IOPS=35, BW=35.8MiB/s (37.5MB/s)(374MiB/10452msec) 00:15:35.595 slat (usec): min=38, max=2143.9k, avg=27815.13, stdev=206145.61 00:15:35.595 clat (msec): min=46, max=6806, avg=2043.08, stdev=1797.95 00:15:35.595 lat (msec): min=734, max=8580, avg=2070.90, stdev=1831.54 00:15:35.595 clat percentiles (msec): 00:15:35.595 | 1.00th=[ 735], 5.00th=[ 743], 10.00th=[ 743], 20.00th=[ 785], 00:15:35.595 | 30.00th=[ 818], 40.00th=[ 852], 50.00th=[ 860], 60.00th=[ 2333], 00:15:35.595 | 70.00th=[ 2601], 80.00th=[ 2802], 90.00th=[ 5067], 95.00th=[ 6745], 00:15:35.596 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:15:35.596 | 99.99th=[ 6812] 00:15:35.596 bw ( KiB/s): min=75776, max=159744, per=3.39%, avg=125952.00, stdev=40427.47, samples=4 00:15:35.596 iops : min= 74, max= 156, avg=123.00, stdev=39.48, samples=4 00:15:35.596 lat (msec) : 50=0.27%, 750=12.57%, 1000=42.51%, >=2000=44.65% 00:15:35.596 cpu : usr=0.01%, sys=1.19%, ctx=340, majf=0, minf=32769 00:15:35.596 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.2% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.596 issued rwts: total=374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job0: (groupid=0, jobs=1): err= 0: pid=3227751: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=73, BW=73.0MiB/s (76.6MB/s)(763MiB/10449msec) 00:15:35.596 slat (usec): min=38, max=2061.9k, avg=13617.50, stdev=142068.49 00:15:35.596 clat (msec): min=55, max=10273, avg=1507.33, stdev=2220.20 00:15:35.596 lat (msec): min=365, max=10294, avg=1520.95, stdev=2234.53 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 368], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 372], 00:15:35.596 | 30.00th=[ 376], 40.00th=[ 376], 50.00th=[ 376], 60.00th=[ 376], 00:15:35.596 | 70.00th=[ 405], 80.00th=[ 2165], 90.00th=[ 6544], 95.00th=[ 6611], 00:15:35.596 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[10268], 99.95th=[10268], 00:15:35.596 | 99.99th=[10268] 00:15:35.596 bw ( KiB/s): min=14336, max=348160, per=5.01%, avg=185782.86, stdev=159970.15, samples=7 00:15:35.596 iops : min= 14, max= 340, avg=181.43, stdev=156.22, samples=7 00:15:35.596 lat (msec) : 100=0.13%, 500=75.23%, >=2000=24.64% 00:15:35.596 cpu : usr=0.03%, sys=1.46%, ctx=681, majf=0, minf=32769 
00:15:35.596 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.7% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.596 issued rwts: total=763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227752: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=104, BW=104MiB/s (110MB/s)(1093MiB/10460msec) 00:15:35.596 slat (usec): min=39, max=2111.4k, avg=9521.31, stdev=64841.88 00:15:35.596 clat (msec): min=47, max=3706, avg=1176.15, stdev=738.16 00:15:35.596 lat (msec): min=392, max=3707, avg=1185.67, stdev=740.29 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 393], 5.00th=[ 401], 10.00th=[ 422], 20.00th=[ 558], 00:15:35.596 | 30.00th=[ 776], 40.00th=[ 852], 50.00th=[ 919], 60.00th=[ 944], 00:15:35.596 | 70.00th=[ 1435], 80.00th=[ 1838], 90.00th=[ 2265], 95.00th=[ 2769], 00:15:35.596 | 99.00th=[ 3473], 99.50th=[ 3608], 99.90th=[ 3708], 99.95th=[ 3708], 00:15:35.596 | 99.99th=[ 3708] 00:15:35.596 bw ( KiB/s): min=32768, max=294912, per=3.33%, avg=123520.00, stdev=82720.67, samples=16 00:15:35.596 iops : min= 32, max= 288, avg=120.63, stdev=80.78, samples=16 00:15:35.596 lat (msec) : 50=0.09%, 500=15.92%, 750=12.72%, 1000=35.59%, 2000=24.06% 00:15:35.596 lat (msec) : >=2000=11.62% 00:15:35.596 cpu : usr=0.05%, sys=1.85%, ctx=1428, majf=0, minf=32769 00:15:35.596 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.596 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227753: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=4, BW=4806KiB/s (4921kB/s)(49.0MiB/10440msec) 00:15:35.596 slat (usec): min=785, max=2079.6k, avg=211504.30, stdev=602564.31 00:15:35.596 clat (msec): min=75, max=10433, avg=7874.85, stdev=3074.52 00:15:35.596 lat (msec): min=2117, max=10439, avg=8086.36, stdev=2876.92 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 75], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329], 00:15:35.596 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10268], 00:15:35.596 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.596 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.596 | 99.99th=[10402] 00:15:35.596 lat (msec) : 100=2.04%, >=2000=97.96% 00:15:35.596 cpu : usr=0.00%, sys=0.44%, ctx=94, majf=0, minf=12545 00:15:35.596 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.596 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227754: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=98, BW=98.3MiB/s (103MB/s)(1026MiB/10434msec) 00:15:35.596 slat (usec): min=31, max=1832.9k, avg=10111.12, stdev=63467.06 00:15:35.596 clat (msec): min=55, max=2691, avg=1239.73, stdev=623.58 00:15:35.596 lat 
(msec): min=414, max=2692, avg=1249.84, stdev=623.99 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 451], 5.00th=[ 542], 10.00th=[ 617], 20.00th=[ 701], 00:15:35.596 | 30.00th=[ 785], 40.00th=[ 802], 50.00th=[ 885], 60.00th=[ 1351], 00:15:35.596 | 70.00th=[ 1620], 80.00th=[ 1871], 90.00th=[ 2198], 95.00th=[ 2467], 00:15:35.596 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:15:35.596 | 99.99th=[ 2702] 00:15:35.596 bw ( KiB/s): min= 4096, max=253952, per=2.91%, avg=108182.59, stdev=70262.50, samples=17 00:15:35.596 iops : min= 4, max= 248, avg=105.65, stdev=68.62, samples=17 00:15:35.596 lat (msec) : 100=0.10%, 500=2.83%, 750=23.59%, 1000=24.76%, 2000=35.09% 00:15:35.596 lat (msec) : >=2000=13.65% 00:15:35.596 cpu : usr=0.01%, sys=1.70%, ctx=1597, majf=0, minf=32769 00:15:35.596 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.596 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227755: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=5, BW=5301KiB/s (5428kB/s)(54.0MiB/10431msec) 00:15:35.596 slat (usec): min=686, max=2092.5k, avg=191764.76, stdev=581275.59 00:15:35.596 clat (msec): min=75, max=10428, avg=8214.89, stdev=3174.89 00:15:35.596 lat (msec): min=2130, max=10430, avg=8406.65, stdev=2980.79 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 75], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:15:35.596 | 30.00th=[ 6477], 40.00th=[10268], 50.00th=[10268], 60.00th=[10402], 00:15:35.596 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.596 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.596 | 99.99th=[10402] 00:15:35.596 lat (msec) : 100=1.85%, >=2000=98.15% 00:15:35.596 cpu : usr=0.01%, sys=0.49%, ctx=94, majf=0, minf=13825 00:15:35.596 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.596 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227756: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=66, BW=66.0MiB/s (69.3MB/s)(686MiB/10386msec) 00:15:35.596 slat (usec): min=49, max=2027.4k, avg=15043.57, stdev=79218.61 00:15:35.596 clat (msec): min=62, max=3592, avg=1780.04, stdev=635.95 00:15:35.596 lat (msec): min=814, max=3655, avg=1795.09, stdev=633.86 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 818], 5.00th=[ 902], 10.00th=[ 953], 20.00th=[ 1284], 00:15:35.596 | 30.00th=[ 1452], 40.00th=[ 1552], 50.00th=[ 1703], 60.00th=[ 1838], 00:15:35.596 | 70.00th=[ 2022], 80.00th=[ 2232], 90.00th=[ 2702], 95.00th=[ 2970], 00:15:35.596 | 99.00th=[ 3540], 99.50th=[ 3540], 99.90th=[ 3608], 99.95th=[ 3608], 00:15:35.596 | 99.99th=[ 3608] 00:15:35.596 bw ( KiB/s): min=18432, max=169984, per=2.20%, avg=81627.43, stdev=46666.04, samples=14 00:15:35.596 iops : min= 18, max= 166, avg=79.71, stdev=45.57, samples=14 00:15:35.596 lat (msec) : 100=0.15%, 1000=14.87%, 2000=54.08%, >=2000=30.90% 00:15:35.596 cpu 
: usr=0.05%, sys=1.06%, ctx=2021, majf=0, minf=32769 00:15:35.596 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.596 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227757: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=37, BW=37.4MiB/s (39.2MB/s)(391MiB/10456msec) 00:15:35.596 slat (usec): min=467, max=2048.8k, avg=26595.44, stdev=159898.38 00:15:35.596 clat (msec): min=55, max=5694, avg=2427.36, stdev=1257.84 00:15:35.596 lat (msec): min=741, max=5703, avg=2453.96, stdev=1260.38 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 785], 5.00th=[ 860], 10.00th=[ 1036], 20.00th=[ 1217], 00:15:35.596 | 30.00th=[ 1905], 40.00th=[ 2123], 50.00th=[ 2232], 60.00th=[ 2299], 00:15:35.596 | 70.00th=[ 2769], 80.00th=[ 3004], 90.00th=[ 5201], 95.00th=[ 5336], 00:15:35.596 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:15:35.596 | 99.99th=[ 5671] 00:15:35.596 bw ( KiB/s): min= 8192, max=163840, per=1.81%, avg=67315.12, stdev=50043.99, samples=8 00:15:35.596 iops : min= 8, max= 160, avg=65.62, stdev=48.91, samples=8 00:15:35.596 lat (msec) : 100=0.26%, 750=0.26%, 1000=7.93%, 2000=25.32%, >=2000=66.24% 00:15:35.596 cpu : usr=0.05%, sys=1.06%, ctx=1237, majf=0, minf=32769 00:15:35.596 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:15:35.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.596 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.596 issued rwts: total=391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.596 job1: (groupid=0, jobs=1): err= 0: pid=3227758: Mon Dec 9 11:54:42 2024 00:15:35.596 read: IOPS=109, BW=110MiB/s (115MB/s)(1100MiB/10042msec) 00:15:35.596 slat (usec): min=38, max=1538.8k, avg=9093.47, stdev=48194.41 00:15:35.596 clat (msec): min=34, max=5769, avg=930.00, stdev=611.85 00:15:35.596 lat (msec): min=54, max=5782, avg=939.09, stdev=624.51 00:15:35.596 clat percentiles (msec): 00:15:35.596 | 1.00th=[ 84], 5.00th=[ 226], 10.00th=[ 305], 20.00th=[ 531], 00:15:35.597 | 30.00th=[ 550], 40.00th=[ 684], 50.00th=[ 776], 60.00th=[ 885], 00:15:35.597 | 70.00th=[ 969], 80.00th=[ 1368], 90.00th=[ 1938], 95.00th=[ 2140], 00:15:35.597 | 99.00th=[ 2198], 99.50th=[ 2232], 99.90th=[ 5738], 99.95th=[ 5738], 00:15:35.597 | 99.99th=[ 5738] 00:15:35.597 bw ( KiB/s): min=40960, max=247808, per=3.36%, avg=124544.00, stdev=70240.60, samples=16 00:15:35.597 iops : min= 40, max= 242, avg=121.63, stdev=68.59, samples=16 00:15:35.597 lat (msec) : 50=0.09%, 100=1.45%, 250=6.82%, 500=7.18%, 750=30.64% 00:15:35.597 lat (msec) : 1000=26.73%, 2000=18.45%, >=2000=8.64% 00:15:35.597 cpu : usr=0.03%, sys=1.28%, ctx=2695, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.597 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 
0: pid=3227759: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=4, BW=4436KiB/s (4543kB/s)(45.0MiB/10387msec) 00:15:35.597 slat (usec): min=760, max=2092.5k, avg=229407.62, stdev=622499.84 00:15:35.597 clat (msec): min=63, max=10382, avg=5555.61, stdev=3393.49 00:15:35.597 lat (msec): min=2086, max=10386, avg=5785.02, stdev=3362.56 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 64], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2123], 00:15:35.597 | 30.00th=[ 2165], 40.00th=[ 4245], 50.00th=[ 4279], 60.00th=[ 6409], 00:15:35.597 | 70.00th=[ 8557], 80.00th=[10134], 90.00th=[10402], 95.00th=[10402], 00:15:35.597 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.597 | 99.99th=[10402] 00:15:35.597 lat (msec) : 100=2.22%, >=2000=97.78% 00:15:35.597 cpu : usr=0.00%, sys=0.35%, ctx=85, majf=0, minf=11521 00:15:35.597 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.597 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 0: pid=3227760: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=35, BW=36.0MiB/s (37.7MB/s)(371MiB/10315msec) 00:15:35.597 slat (usec): min=38, max=2128.1k, avg=26951.85, stdev=204514.67 00:15:35.597 clat (msec): min=313, max=8939, avg=971.16, stdev=1588.53 00:15:35.597 lat (msec): min=315, max=8950, avg=998.11, stdev=1646.04 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 321], 5.00th=[ 380], 10.00th=[ 489], 20.00th=[ 523], 00:15:35.597 | 30.00th=[ 523], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 600], 00:15:35.597 | 70.00th=[ 634], 80.00th=[ 693], 90.00th=[ 818], 95.00th=[ 5067], 00:15:35.597 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:15:35.597 | 99.99th=[ 8926] 00:15:35.597 bw ( KiB/s): min=18432, max=249856, per=4.43%, avg=164469.33, stdev=127075.35, samples=3 00:15:35.597 iops : min= 18, max= 244, avg=160.33, stdev=123.90, samples=3 00:15:35.597 lat (msec) : 500=12.13%, 750=71.43%, 1000=9.43%, >=2000=7.01% 00:15:35.597 cpu : usr=0.02%, sys=1.01%, ctx=369, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.597 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 0: pid=3227761: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=89, BW=89.4MiB/s (93.7MB/s)(920MiB/10293msec) 00:15:35.597 slat (usec): min=34, max=1824.6k, avg=11181.14, stdev=61463.95 00:15:35.597 clat (msec): min=2, max=3501, avg=1299.91, stdev=749.13 00:15:35.597 lat (msec): min=459, max=3505, avg=1311.09, stdev=749.97 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 468], 5.00th=[ 558], 10.00th=[ 592], 20.00th=[ 667], 00:15:35.597 | 30.00th=[ 726], 40.00th=[ 902], 50.00th=[ 1167], 60.00th=[ 1284], 00:15:35.597 | 70.00th=[ 1469], 80.00th=[ 1787], 90.00th=[ 2433], 95.00th=[ 3205], 00:15:35.597 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3507], 99.95th=[ 3507], 00:15:35.597 | 99.99th=[ 3507] 00:15:35.597 bw ( KiB/s): min= 2048, max=204800, 
per=2.91%, avg=108134.40, stdev=65408.79, samples=15 00:15:35.597 iops : min= 2, max= 200, avg=105.60, stdev=63.88, samples=15 00:15:35.597 lat (msec) : 4=0.11%, 500=1.63%, 750=31.20%, 1000=9.35%, 2000=45.11% 00:15:35.597 lat (msec) : >=2000=12.61% 00:15:35.597 cpu : usr=0.01%, sys=1.26%, ctx=2793, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.597 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 0: pid=3227762: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=145, BW=146MiB/s (153MB/s)(1525MiB/10462msec) 00:15:35.597 slat (usec): min=39, max=2138.7k, avg=6825.92, stdev=55163.81 00:15:35.597 clat (msec): min=46, max=3743, avg=829.25, stdev=751.71 00:15:35.597 lat (msec): min=191, max=3749, avg=836.07, stdev=755.38 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 218], 20.00th=[ 239], 00:15:35.597 | 30.00th=[ 334], 40.00th=[ 397], 50.00th=[ 472], 60.00th=[ 860], 00:15:35.597 | 70.00th=[ 986], 80.00th=[ 1234], 90.00th=[ 1586], 95.00th=[ 2567], 00:15:35.597 | 99.00th=[ 3608], 99.50th=[ 3675], 99.90th=[ 3742], 99.95th=[ 3742], 00:15:35.597 | 99.99th=[ 3742] 00:15:35.597 bw ( KiB/s): min= 4096, max=577536, per=4.82%, avg=178816.00, stdev=154178.21, samples=16 00:15:35.597 iops : min= 4, max= 564, avg=174.62, stdev=150.56, samples=16 00:15:35.597 lat (msec) : 50=0.07%, 250=22.95%, 500=27.67%, 750=2.89%, 1000=17.97% 00:15:35.597 lat (msec) : 2000=20.13%, >=2000=8.33% 00:15:35.597 cpu : usr=0.06%, sys=2.11%, ctx=4386, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.597 issued rwts: total=1525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 0: pid=3227763: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=71, BW=71.9MiB/s (75.4MB/s)(748MiB/10402msec) 00:15:35.597 slat (usec): min=332, max=2058.0k, avg=13838.79, stdev=76434.40 00:15:35.597 clat (msec): min=47, max=3422, avg=1633.53, stdev=699.93 00:15:35.597 lat (msec): min=668, max=3425, avg=1647.36, stdev=700.12 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 718], 5.00th=[ 785], 10.00th=[ 810], 20.00th=[ 860], 00:15:35.597 | 30.00th=[ 1053], 40.00th=[ 1318], 50.00th=[ 1418], 60.00th=[ 1955], 00:15:35.597 | 70.00th=[ 2232], 80.00th=[ 2333], 90.00th=[ 2500], 95.00th=[ 2668], 00:15:35.597 | 99.00th=[ 3272], 99.50th=[ 3339], 99.90th=[ 3440], 99.95th=[ 3440], 00:15:35.597 | 99.99th=[ 3440] 00:15:35.597 bw ( KiB/s): min= 6144, max=188416, per=2.28%, avg=84650.67, stdev=52481.04, samples=15 00:15:35.597 iops : min= 6, max= 184, avg=82.67, stdev=51.25, samples=15 00:15:35.597 lat (msec) : 50=0.13%, 750=3.07%, 1000=26.34%, 2000=31.28%, >=2000=39.17% 00:15:35.597 cpu : usr=0.06%, sys=1.19%, ctx=2437, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 
complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.597 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job1: (groupid=0, jobs=1): err= 0: pid=3227764: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=79, BW=79.5MiB/s (83.4MB/s)(824MiB/10362msec) 00:15:35.597 slat (usec): min=505, max=1823.0k, avg=12497.01, stdev=65339.34 00:15:35.597 clat (msec): min=61, max=4019, avg=1447.48, stdev=1028.08 00:15:35.597 lat (msec): min=298, max=4029, avg=1459.98, stdev=1031.89 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 300], 5.00th=[ 321], 10.00th=[ 334], 20.00th=[ 351], 00:15:35.597 | 30.00th=[ 368], 40.00th=[ 894], 50.00th=[ 1519], 60.00th=[ 1854], 00:15:35.597 | 70.00th=[ 2123], 80.00th=[ 2198], 90.00th=[ 2702], 95.00th=[ 3608], 00:15:35.597 | 99.00th=[ 3910], 99.50th=[ 3977], 99.90th=[ 4010], 99.95th=[ 4010], 00:15:35.597 | 99.99th=[ 4010] 00:15:35.597 bw ( KiB/s): min=30720, max=374784, per=2.95%, avg=109646.77, stdev=106688.21, samples=13 00:15:35.597 iops : min= 30, max= 366, avg=107.08, stdev=104.19, samples=13 00:15:35.597 lat (msec) : 100=0.12%, 500=35.19%, 750=3.52%, 1000=2.31%, 2000=23.91% 00:15:35.597 lat (msec) : >=2000=34.95% 00:15:35.597 cpu : usr=0.03%, sys=1.22%, ctx=2579, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.597 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.597 job2: (groupid=0, jobs=1): err= 0: pid=3227765: Mon Dec 9 11:54:42 2024 00:15:35.597 read: IOPS=51, BW=51.2MiB/s (53.7MB/s)(525MiB/10248msec) 00:15:35.597 slat (usec): min=42, max=2072.3k, avg=19090.94, stdev=112764.51 00:15:35.597 clat (msec): min=222, max=5521, avg=2242.09, stdev=1771.09 00:15:35.597 lat (msec): min=259, max=5542, avg=2261.18, stdev=1778.06 00:15:35.597 clat percentiles (msec): 00:15:35.597 | 1.00th=[ 264], 5.00th=[ 330], 10.00th=[ 527], 20.00th=[ 1053], 00:15:35.597 | 30.00th=[ 1250], 40.00th=[ 1552], 50.00th=[ 1603], 60.00th=[ 1636], 00:15:35.597 | 70.00th=[ 1703], 80.00th=[ 5134], 90.00th=[ 5403], 95.00th=[ 5403], 00:15:35.597 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5537], 99.95th=[ 5537], 00:15:35.597 | 99.99th=[ 5537] 00:15:35.597 bw ( KiB/s): min=10240, max=113432, per=1.91%, avg=70821.09, stdev=32912.85, samples=11 00:15:35.597 iops : min= 10, max= 110, avg=69.09, stdev=32.04, samples=11 00:15:35.597 lat (msec) : 250=0.19%, 500=9.14%, 750=4.57%, 1000=4.38%, 2000=56.19% 00:15:35.597 lat (msec) : >=2000=25.52% 00:15:35.597 cpu : usr=0.01%, sys=1.01%, ctx=1497, majf=0, minf=32769 00:15:35.597 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:15:35.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.597 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.597 issued rwts: total=525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227766: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=2, BW=2750KiB/s (2816kB/s)(28.0MiB/10428msec) 00:15:35.598 slat (usec): min=741, max=2140.1k, avg=369678.32, stdev=782784.65 00:15:35.598 clat 
(msec): min=76, max=10424, avg=6390.62, stdev=3876.55 00:15:35.598 lat (msec): min=2120, max=10427, avg=6760.30, stdev=3743.37 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 77], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2140], 00:15:35.598 | 30.00th=[ 2165], 40.00th=[ 4329], 50.00th=[ 8557], 60.00th=[ 8658], 00:15:35.598 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.598 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.598 | 99.99th=[10402] 00:15:35.598 lat (msec) : 100=3.57%, >=2000=96.43% 00:15:35.598 cpu : usr=0.01%, sys=0.24%, ctx=89, majf=0, minf=7169 00:15:35.598 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:35.598 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227767: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=46, BW=46.3MiB/s (48.5MB/s)(479MiB/10350msec) 00:15:35.598 slat (usec): min=50, max=2156.1k, avg=21496.05, stdev=136362.75 00:15:35.598 clat (msec): min=50, max=6231, avg=2578.27, stdev=1724.09 00:15:35.598 lat (msec): min=1083, max=6270, avg=2599.77, stdev=1723.58 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 1083], 5.00th=[ 1150], 10.00th=[ 1183], 20.00th=[ 1536], 00:15:35.598 | 30.00th=[ 1620], 40.00th=[ 1653], 50.00th=[ 1687], 60.00th=[ 1720], 00:15:35.598 | 70.00th=[ 1787], 80.00th=[ 5067], 90.00th=[ 5604], 95.00th=[ 5873], 00:15:35.598 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:15:35.598 | 99.99th=[ 6208] 00:15:35.598 bw ( KiB/s): min= 2048, max=102400, per=1.61%, avg=59904.00, stdev=33231.63, samples=12 00:15:35.598 iops : min= 2, max= 100, avg=58.50, stdev=32.45, samples=12 00:15:35.598 lat (msec) : 100=0.21%, 2000=72.86%, >=2000=26.93% 00:15:35.598 cpu : usr=0.02%, sys=0.90%, ctx=1460, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.8% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.598 issued rwts: total=479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227768: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=60, BW=60.3MiB/s (63.3MB/s)(630MiB/10444msec) 00:15:35.598 slat (usec): min=41, max=2105.8k, avg=16473.67, stdev=116693.48 00:15:35.598 clat (msec): min=61, max=6006, avg=2012.23, stdev=1616.33 00:15:35.598 lat (msec): min=649, max=6022, avg=2028.71, stdev=1620.01 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 651], 5.00th=[ 693], 10.00th=[ 726], 20.00th=[ 793], 00:15:35.598 | 30.00th=[ 978], 40.00th=[ 1150], 50.00th=[ 1284], 60.00th=[ 1770], 00:15:35.598 | 70.00th=[ 1905], 80.00th=[ 3943], 90.00th=[ 5067], 95.00th=[ 5604], 00:15:35.598 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:15:35.598 | 99.99th=[ 6007] 00:15:35.598 bw ( KiB/s): min=18432, max=190464, per=2.31%, avg=85674.67, stdev=51008.50, samples=12 00:15:35.598 iops : min= 18, max= 186, avg=83.67, stdev=49.81, samples=12 00:15:35.598 lat (msec) : 100=0.16%, 750=13.17%, 1000=19.37%, 2000=43.81%, 
>=2000=23.49% 00:15:35.598 cpu : usr=0.00%, sys=1.48%, ctx=1378, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.598 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227769: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=42, BW=42.8MiB/s (44.9MB/s)(443MiB/10341msec) 00:15:35.598 slat (usec): min=371, max=2023.9k, avg=23215.96, stdev=180004.37 00:15:35.598 clat (msec): min=54, max=10218, avg=2387.17, stdev=2414.75 00:15:35.598 lat (msec): min=583, max=10243, avg=2410.39, stdev=2430.24 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 584], 5.00th=[ 600], 10.00th=[ 609], 20.00th=[ 625], 00:15:35.598 | 30.00th=[ 651], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 827], 00:15:35.598 | 70.00th=[ 2836], 80.00th=[ 4329], 90.00th=[ 6611], 95.00th=[ 6745], 00:15:35.598 | 99.00th=[ 8557], 99.50th=[ 8658], 99.90th=[10268], 99.95th=[10268], 00:15:35.598 | 99.99th=[10268] 00:15:35.598 bw ( KiB/s): min= 8192, max=200704, per=2.90%, avg=107511.00, stdev=83016.45, samples=6 00:15:35.598 iops : min= 8, max= 196, avg=104.83, stdev=81.26, samples=6 00:15:35.598 lat (msec) : 100=0.23%, 750=54.63%, 1000=5.19%, >=2000=39.95% 00:15:35.598 cpu : usr=0.00%, sys=0.87%, ctx=839, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.598 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227770: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=58, BW=58.2MiB/s (61.0MB/s)(600MiB/10306msec) 00:15:35.598 slat (usec): min=33, max=1800.2k, avg=17070.49, stdev=99949.88 00:15:35.598 clat (msec): min=60, max=5183, avg=2063.95, stdev=1266.03 00:15:35.598 lat (msec): min=917, max=5191, avg=2081.02, stdev=1265.81 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 927], 5.00th=[ 1045], 10.00th=[ 1150], 20.00th=[ 1351], 00:15:35.598 | 30.00th=[ 1401], 40.00th=[ 1435], 50.00th=[ 1519], 60.00th=[ 1569], 00:15:35.598 | 70.00th=[ 1620], 80.00th=[ 3306], 90.00th=[ 4530], 95.00th=[ 4799], 00:15:35.598 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:15:35.598 | 99.99th=[ 5201] 00:15:35.598 bw ( KiB/s): min= 4096, max=116736, per=2.00%, avg=74358.15, stdev=32071.36, samples=13 00:15:35.598 iops : min= 4, max= 114, avg=72.62, stdev=31.32, samples=13 00:15:35.598 lat (msec) : 100=0.17%, 1000=1.50%, 2000=76.33%, >=2000=22.00% 00:15:35.598 cpu : usr=0.00%, sys=1.15%, ctx=1618, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.598 issued rwts: total=600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227771: Mon Dec 9 11:54:42 2024 
00:15:35.598 read: IOPS=56, BW=56.0MiB/s (58.7MB/s)(584MiB/10424msec) 00:15:35.598 slat (usec): min=40, max=2030.6k, avg=17713.85, stdev=104808.61 00:15:35.598 clat (msec): min=76, max=5308, avg=2139.14, stdev=1312.05 00:15:35.598 lat (msec): min=943, max=5310, avg=2156.86, stdev=1311.69 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 1003], 5.00th=[ 1045], 10.00th=[ 1133], 20.00th=[ 1250], 00:15:35.598 | 30.00th=[ 1401], 40.00th=[ 1502], 50.00th=[ 1603], 60.00th=[ 1670], 00:15:35.598 | 70.00th=[ 1770], 80.00th=[ 3708], 90.00th=[ 4597], 95.00th=[ 5000], 00:15:35.598 | 99.00th=[ 5201], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:15:35.598 | 99.99th=[ 5336] 00:15:35.598 bw ( KiB/s): min=18432, max=116736, per=1.94%, avg=71837.54, stdev=32964.33, samples=13 00:15:35.598 iops : min= 18, max= 114, avg=70.15, stdev=32.19, samples=13 00:15:35.598 lat (msec) : 100=0.17%, 1000=0.68%, 2000=75.86%, >=2000=23.29% 00:15:35.598 cpu : usr=0.04%, sys=1.19%, ctx=1441, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.598 issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227772: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=81, BW=81.5MiB/s (85.5MB/s)(845MiB/10365msec) 00:15:35.598 slat (usec): min=42, max=2072.8k, avg=12211.14, stdev=98395.61 00:15:35.598 clat (msec): min=42, max=5980, avg=1498.20, stdev=1587.74 00:15:35.598 lat (msec): min=393, max=6000, avg=1510.41, stdev=1593.17 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 401], 00:15:35.598 | 30.00th=[ 414], 40.00th=[ 443], 50.00th=[ 1116], 60.00th=[ 1250], 00:15:35.598 | 70.00th=[ 1435], 80.00th=[ 1670], 90.00th=[ 4799], 95.00th=[ 5403], 00:15:35.598 | 99.00th=[ 5805], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:15:35.598 | 99.99th=[ 6007] 00:15:35.598 bw ( KiB/s): min=14336, max=319488, per=3.04%, avg=112986.92, stdev=87674.74, samples=13 00:15:35.598 iops : min= 14, max= 312, avg=110.31, stdev=85.62, samples=13 00:15:35.598 lat (msec) : 50=0.12%, 500=41.18%, 750=3.79%, 1000=2.01%, 2000=37.28% 00:15:35.598 lat (msec) : >=2000=15.62% 00:15:35.598 cpu : usr=0.04%, sys=1.24%, ctx=1538, majf=0, minf=32769 00:15:35.598 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.5% 00:15:35.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.598 issued rwts: total=845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.598 job2: (groupid=0, jobs=1): err= 0: pid=3227773: Mon Dec 9 11:54:42 2024 00:15:35.598 read: IOPS=16, BW=16.7MiB/s (17.5MB/s)(174MiB/10399msec) 00:15:35.598 slat (usec): min=660, max=2136.3k, avg=59394.04, stdev=288611.80 00:15:35.598 clat (msec): min=62, max=9888, avg=6550.85, stdev=2328.01 00:15:35.598 lat (msec): min=2094, max=9893, avg=6610.24, stdev=2285.18 00:15:35.598 clat percentiles (msec): 00:15:35.598 | 1.00th=[ 2089], 5.00th=[ 2735], 10.00th=[ 2836], 20.00th=[ 2937], 00:15:35.598 | 30.00th=[ 6409], 40.00th=[ 7349], 50.00th=[ 7550], 60.00th=[ 7819], 00:15:35.598 | 70.00th=[ 
8020], 80.00th=[ 8288], 90.00th=[ 8490], 95.00th=[ 9866], 00:15:35.598 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:15:35.598 | 99.99th=[ 9866] 00:15:35.598 bw ( KiB/s): min= 6144, max=32768, per=0.42%, avg=15701.33, stdev=13251.47, samples=6 00:15:35.598 iops : min= 6, max= 32, avg=15.33, stdev=12.94, samples=6 00:15:35.598 lat (msec) : 100=0.57%, >=2000=99.43% 00:15:35.598 cpu : usr=0.00%, sys=0.67%, ctx=556, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.4%, >=64=63.8% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:15:35.599 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job2: (groupid=0, jobs=1): err= 0: pid=3227774: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=22, BW=22.1MiB/s (23.1MB/s)(230MiB/10426msec) 00:15:35.599 slat (usec): min=452, max=2117.5k, avg=45063.45, stdev=260110.14 00:15:35.599 clat (msec): min=60, max=9342, avg=1928.23, stdev=2560.63 00:15:35.599 lat (msec): min=436, max=9349, avg=1973.29, stdev=2607.07 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 443], 5.00th=[ 506], 10.00th=[ 584], 20.00th=[ 768], 00:15:35.599 | 30.00th=[ 911], 40.00th=[ 961], 50.00th=[ 1011], 60.00th=[ 1062], 00:15:35.599 | 70.00th=[ 1150], 80.00th=[ 1234], 90.00th=[ 7550], 95.00th=[ 9329], 00:15:35.599 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:15:35.599 | 99.99th=[ 9329] 00:15:35.599 bw ( KiB/s): min=67584, max=141312, per=2.81%, avg=104448.00, stdev=52133.57, samples=2 00:15:35.599 iops : min= 66, max= 138, avg=102.00, stdev=50.91, samples=2 00:15:35.599 lat (msec) : 100=0.43%, 500=3.91%, 750=13.91%, 1000=30.00%, 2000=36.52% 00:15:35.599 lat (msec) : >=2000=15.22% 00:15:35.599 cpu : usr=0.00%, sys=0.86%, ctx=430, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:15:35.599 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job2: (groupid=0, jobs=1): err= 0: pid=3227775: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=44, BW=44.0MiB/s (46.2MB/s)(456MiB/10353msec) 00:15:35.599 slat (usec): min=89, max=2032.2k, avg=22643.09, stdev=108930.04 00:15:35.599 clat (msec): min=25, max=4827, avg=2701.97, stdev=896.92 00:15:35.599 lat (msec): min=1363, max=4853, avg=2724.61, stdev=888.78 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 1385], 5.00th=[ 1469], 10.00th=[ 1670], 20.00th=[ 1921], 00:15:35.599 | 30.00th=[ 2165], 40.00th=[ 2433], 50.00th=[ 2567], 60.00th=[ 2702], 00:15:35.599 | 70.00th=[ 2802], 80.00th=[ 3473], 90.00th=[ 4279], 95.00th=[ 4530], 00:15:35.599 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:15:35.599 | 99.99th=[ 4799] 00:15:35.599 bw ( KiB/s): min=12288, max=88064, per=1.29%, avg=47964.71, stdev=19721.50, samples=14 00:15:35.599 iops : min= 12, max= 86, avg=46.71, stdev=19.19, samples=14 00:15:35.599 lat (msec) : 50=0.22%, 2000=23.90%, >=2000=75.88% 00:15:35.599 cpu : usr=0.00%, sys=1.15%, ctx=1697, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 
32=7.0%, >=64=86.2% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.599 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job2: (groupid=0, jobs=1): err= 0: pid=3227776: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=34, BW=34.1MiB/s (35.8MB/s)(353MiB/10341msec) 00:15:35.599 slat (usec): min=82, max=2105.3k, avg=29165.68, stdev=178313.38 00:15:35.599 clat (msec): min=42, max=8524, avg=3405.36, stdev=1256.62 00:15:35.599 lat (msec): min=1363, max=8557, avg=3434.52, stdev=1265.88 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 1368], 5.00th=[ 1502], 10.00th=[ 1552], 20.00th=[ 1821], 00:15:35.599 | 30.00th=[ 3406], 40.00th=[ 3540], 50.00th=[ 3574], 60.00th=[ 3708], 00:15:35.599 | 70.00th=[ 3910], 80.00th=[ 4530], 90.00th=[ 5067], 95.00th=[ 5336], 00:15:35.599 | 99.00th=[ 5403], 99.50th=[ 6409], 99.90th=[ 8557], 99.95th=[ 8557], 00:15:35.599 | 99.99th=[ 8557] 00:15:35.599 bw ( KiB/s): min= 4096, max=90112, per=1.38%, avg=51200.00, stdev=30839.24, samples=9 00:15:35.599 iops : min= 4, max= 88, avg=50.00, stdev=30.12, samples=9 00:15:35.599 lat (msec) : 50=0.28%, 2000=26.63%, >=2000=73.09% 00:15:35.599 cpu : usr=0.01%, sys=0.83%, ctx=1006, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.1%, >=64=82.2% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.599 issued rwts: total=353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job2: (groupid=0, jobs=1): err= 0: pid=3227777: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=146, BW=147MiB/s (154MB/s)(1526MiB/10389msec) 00:15:35.599 slat (usec): min=35, max=1948.3k, avg=6759.11, stdev=62071.53 00:15:35.599 clat (msec): min=64, max=4408, avg=839.04, stdev=999.86 00:15:35.599 lat (msec): min=369, max=4411, avg=845.80, stdev=1002.95 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 380], 5.00th=[ 380], 10.00th=[ 384], 20.00th=[ 384], 00:15:35.599 | 30.00th=[ 388], 40.00th=[ 397], 50.00th=[ 460], 60.00th=[ 531], 00:15:35.599 | 70.00th=[ 693], 80.00th=[ 877], 90.00th=[ 1003], 95.00th=[ 4144], 00:15:35.599 | 99.00th=[ 4329], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:15:35.599 | 99.99th=[ 4396] 00:15:35.599 bw ( KiB/s): min=20480, max=344064, per=5.51%, avg=204465.64, stdev=111068.17, samples=14 00:15:35.599 iops : min= 20, max= 336, avg=199.64, stdev=108.44, samples=14 00:15:35.599 lat (msec) : 100=0.07%, 500=53.60%, 750=19.20%, 1000=16.84%, 2000=1.31% 00:15:35.599 lat (msec) : >=2000=8.98% 00:15:35.599 cpu : usr=0.15%, sys=2.30%, ctx=1912, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.599 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job3: (groupid=0, jobs=1): err= 0: pid=3227778: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=2, BW=2773KiB/s (2840kB/s)(28.0MiB/10338msec) 00:15:35.599 slat (msec): min=4, max=2073, 
avg=366.28, stdev=763.52 00:15:35.599 clat (msec): min=81, max=10230, avg=5624.07, stdev=2894.89 00:15:35.599 lat (msec): min=2109, max=10337, avg=5990.35, stdev=2815.35 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 82], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165], 00:15:35.599 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:15:35.599 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10268], 00:15:35.599 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:15:35.599 | 99.99th=[10268] 00:15:35.599 lat (msec) : 100=3.57%, >=2000=96.43% 00:15:35.599 cpu : usr=0.00%, sys=0.23%, ctx=63, majf=0, minf=7169 00:15:35.599 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:35.599 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job3: (groupid=0, jobs=1): err= 0: pid=3227779: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=46, BW=46.2MiB/s (48.4MB/s)(465MiB/10069msec) 00:15:35.599 slat (usec): min=36, max=2072.9k, avg=21520.89, stdev=128927.41 00:15:35.599 clat (msec): min=58, max=4943, avg=1902.60, stdev=1108.40 00:15:35.599 lat (msec): min=79, max=4959, avg=1924.12, stdev=1116.18 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 165], 5.00th=[ 414], 10.00th=[ 751], 20.00th=[ 1083], 00:15:35.599 | 30.00th=[ 1250], 40.00th=[ 1418], 50.00th=[ 1620], 60.00th=[ 1955], 00:15:35.599 | 70.00th=[ 2089], 80.00th=[ 2937], 90.00th=[ 3171], 95.00th=[ 4799], 00:15:35.599 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:15:35.599 | 99.99th=[ 4933] 00:15:35.599 bw ( KiB/s): min=32768, max=159744, per=1.86%, avg=69095.60, stdev=36113.38, samples=10 00:15:35.599 iops : min= 32, max= 156, avg=67.40, stdev=35.26, samples=10 00:15:35.599 lat (msec) : 100=0.65%, 250=1.51%, 500=4.52%, 750=3.44%, 1000=7.74% 00:15:35.599 lat (msec) : 2000=44.52%, >=2000=37.63% 00:15:35.599 cpu : usr=0.04%, sys=1.18%, ctx=1068, majf=0, minf=32769 00:15:35.599 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:15:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.599 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.599 issued rwts: total=465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.599 job3: (groupid=0, jobs=1): err= 0: pid=3227780: Mon Dec 9 11:54:42 2024 00:15:35.599 read: IOPS=16, BW=17.0MiB/s (17.8MB/s)(171MiB/10064msec) 00:15:35.599 slat (usec): min=552, max=2100.6k, avg=58495.10, stdev=263991.50 00:15:35.599 clat (msec): min=60, max=9404, avg=2698.64, stdev=2720.94 00:15:35.599 lat (msec): min=68, max=9412, avg=2757.13, stdev=2765.97 00:15:35.599 clat percentiles (msec): 00:15:35.599 | 1.00th=[ 69], 5.00th=[ 182], 10.00th=[ 271], 20.00th=[ 550], 00:15:35.600 | 30.00th=[ 776], 40.00th=[ 961], 50.00th=[ 1267], 60.00th=[ 1653], 00:15:35.600 | 70.00th=[ 5067], 80.00th=[ 5336], 90.00th=[ 5537], 95.00th=[ 9329], 00:15:35.600 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:15:35.600 | 99.99th=[ 9463] 00:15:35.600 bw ( KiB/s): min=18432, max=71536, per=1.21%, avg=44984.00, stdev=37550.20, samples=2 00:15:35.600 iops : min= 18, max= 69, avg=43.50, 
stdev=36.06, samples=2 00:15:35.600 lat (msec) : 100=2.34%, 250=5.26%, 500=8.77%, 750=12.87%, 1000=12.28% 00:15:35.600 lat (msec) : 2000=23.39%, >=2000=35.09% 00:15:35.600 cpu : usr=0.02%, sys=0.69%, ctx=742, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.4%, 32=18.7%, >=64=63.2% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:15:35.600 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227781: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=73, BW=73.2MiB/s (76.8MB/s)(735MiB/10035msec) 00:15:35.600 slat (usec): min=39, max=2109.1k, avg=13607.58, stdev=98351.07 00:15:35.600 clat (msec): min=29, max=5250, avg=939.95, stdev=389.71 00:15:35.600 lat (msec): min=35, max=5335, avg=953.56, stdev=421.94 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 91], 5.00th=[ 384], 10.00th=[ 701], 20.00th=[ 743], 00:15:35.600 | 30.00th=[ 785], 40.00th=[ 818], 50.00th=[ 844], 60.00th=[ 894], 00:15:35.600 | 70.00th=[ 1070], 80.00th=[ 1200], 90.00th=[ 1385], 95.00th=[ 1469], 00:15:35.600 | 99.00th=[ 1636], 99.50th=[ 1670], 99.90th=[ 5269], 99.95th=[ 5269], 00:15:35.600 | 99.99th=[ 5269] 00:15:35.600 bw ( KiB/s): min=61440, max=192512, per=3.35%, avg=124463.40, stdev=48417.15, samples=10 00:15:35.600 iops : min= 60, max= 188, avg=121.50, stdev=47.32, samples=10 00:15:35.600 lat (msec) : 50=0.41%, 100=0.68%, 250=2.18%, 500=2.72%, 750=17.28% 00:15:35.600 lat (msec) : 1000=43.54%, 2000=32.79%, >=2000=0.41% 00:15:35.600 cpu : usr=0.03%, sys=1.20%, ctx=1442, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.600 issued rwts: total=735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227782: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=33, BW=33.2MiB/s (34.8MB/s)(334MiB/10052msec) 00:15:35.600 slat (usec): min=369, max=2109.1k, avg=29942.56, stdev=164094.15 00:15:35.600 clat (msec): min=49, max=6015, avg=1991.24, stdev=1271.23 00:15:35.600 lat (msec): min=55, max=6095, avg=2021.18, stdev=1293.33 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 60], 5.00th=[ 123], 10.00th=[ 439], 20.00th=[ 835], 00:15:35.600 | 30.00th=[ 1167], 40.00th=[ 1267], 50.00th=[ 1334], 60.00th=[ 2400], 00:15:35.600 | 70.00th=[ 3037], 80.00th=[ 3406], 90.00th=[ 3473], 95.00th=[ 3473], 00:15:35.600 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:15:35.600 | 99.99th=[ 6007] 00:15:35.600 bw ( KiB/s): min= 2048, max=131072, per=1.63%, avg=60469.29, stdev=44203.69, samples=7 00:15:35.600 iops : min= 2, max= 128, avg=59.00, stdev=43.14, samples=7 00:15:35.600 lat (msec) : 50=0.30%, 100=3.89%, 250=2.69%, 500=4.19%, 750=5.69% 00:15:35.600 lat (msec) : 1000=8.08%, 2000=26.35%, >=2000=48.80% 00:15:35.600 cpu : usr=0.02%, sys=0.83%, ctx=1067, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=99.5%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:15:35.600 issued rwts: total=334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227783: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=39, BW=39.9MiB/s (41.9MB/s)(402MiB/10063msec) 00:15:35.600 slat (usec): min=55, max=2057.6k, avg=24870.51, stdev=151235.60 00:15:35.600 clat (msec): min=61, max=7361, avg=1507.71, stdev=1693.67 00:15:35.600 lat (msec): min=72, max=7363, avg=1532.58, stdev=1722.51 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 90], 5.00th=[ 266], 10.00th=[ 468], 20.00th=[ 535], 00:15:35.600 | 30.00th=[ 575], 40.00th=[ 625], 50.00th=[ 835], 60.00th=[ 1301], 00:15:35.600 | 70.00th=[ 1804], 80.00th=[ 1921], 90.00th=[ 2903], 95.00th=[ 7349], 00:15:35.600 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7349], 99.95th=[ 7349], 00:15:35.600 | 99.99th=[ 7349] 00:15:35.600 bw ( KiB/s): min=22528, max=217088, per=2.53%, avg=93866.67, stdev=69359.40, samples=6 00:15:35.600 iops : min= 22, max= 212, avg=91.67, stdev=67.73, samples=6 00:15:35.600 lat (msec) : 100=1.49%, 250=2.99%, 500=6.47%, 750=37.06%, 1000=6.72% 00:15:35.600 lat (msec) : 2000=31.34%, >=2000=13.93% 00:15:35.600 cpu : usr=0.05%, sys=1.21%, ctx=1042, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.600 issued rwts: total=402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227784: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=22, BW=22.0MiB/s (23.1MB/s)(222MiB/10070msec) 00:15:35.600 slat (usec): min=89, max=2078.0k, avg=45075.30, stdev=261016.74 00:15:35.600 clat (msec): min=61, max=9098, avg=2308.08, stdev=2852.95 00:15:35.600 lat (msec): min=100, max=9103, avg=2353.16, stdev=2885.96 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 114], 5.00th=[ 257], 10.00th=[ 451], 20.00th=[ 634], 00:15:35.600 | 30.00th=[ 835], 40.00th=[ 961], 50.00th=[ 1028], 60.00th=[ 1099], 00:15:35.600 | 70.00th=[ 1116], 80.00th=[ 5269], 90.00th=[ 7416], 95.00th=[ 9060], 00:15:35.600 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:15:35.600 | 99.99th=[ 9060] 00:15:35.600 bw ( KiB/s): min=75776, max=118784, per=2.62%, avg=97280.00, stdev=30411.25, samples=2 00:15:35.600 iops : min= 74, max= 116, avg=95.00, stdev=29.70, samples=2 00:15:35.600 lat (msec) : 100=0.45%, 250=4.50%, 500=9.01%, 750=8.11%, 1000=22.52% 00:15:35.600 lat (msec) : 2000=31.98%, >=2000=23.42% 00:15:35.600 cpu : usr=0.00%, sys=0.88%, ctx=538, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.4%, >=64=71.6% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:15:35.600 issued rwts: total=222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227785: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=13, BW=13.9MiB/s (14.6MB/s)(140MiB/10077msec) 00:15:35.600 slat (usec): min=755, max=2090.1k, avg=71549.22, stdev=325009.38 00:15:35.600 clat (msec): min=58, 
max=9899, avg=3503.03, stdev=3759.00 00:15:35.600 lat (msec): min=79, max=9958, avg=3574.58, stdev=3788.14 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 80], 5.00th=[ 279], 10.00th=[ 498], 20.00th=[ 818], 00:15:35.600 | 30.00th=[ 1183], 40.00th=[ 1385], 50.00th=[ 1502], 60.00th=[ 1569], 00:15:35.600 | 70.00th=[ 1854], 80.00th=[ 9731], 90.00th=[ 9866], 95.00th=[ 9866], 00:15:35.600 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:15:35.600 | 99.99th=[ 9866] 00:15:35.600 bw ( KiB/s): min=25748, max=25748, per=0.69%, avg=25748.00, stdev= 0.00, samples=1 00:15:35.600 iops : min= 25, max= 25, avg=25.00, stdev= 0.00, samples=1 00:15:35.600 lat (msec) : 100=2.14%, 250=2.14%, 500=5.71%, 750=6.43%, 1000=7.86% 00:15:35.600 lat (msec) : 2000=45.71%, >=2000=30.00% 00:15:35.600 cpu : usr=0.01%, sys=0.72%, ctx=619, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1% 00:15:35.600 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227786: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=4, BW=4644KiB/s (4755kB/s)(47.0MiB/10364msec) 00:15:35.600 slat (usec): min=770, max=2079.0k, avg=218748.07, stdev=611908.01 00:15:35.600 clat (msec): min=82, max=10361, avg=7021.94, stdev=3310.88 00:15:35.600 lat (msec): min=2106, max=10363, avg=7240.69, stdev=3179.43 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 83], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4245], 00:15:35.600 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[ 8658], 00:15:35.600 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.600 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.600 | 99.99th=[10402] 00:15:35.600 lat (msec) : 100=2.13%, >=2000=97.87% 00:15:35.600 cpu : usr=0.00%, sys=0.39%, ctx=76, majf=0, minf=12033 00:15:35.600 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.600 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.600 job3: (groupid=0, jobs=1): err= 0: pid=3227787: Mon Dec 9 11:54:42 2024 00:15:35.600 read: IOPS=61, BW=61.6MiB/s (64.6MB/s)(620MiB/10064msec) 00:15:35.600 slat (usec): min=53, max=2113.4k, avg=16126.83, stdev=144721.37 00:15:35.600 clat (msec): min=62, max=8025, avg=1972.98, stdev=2787.59 00:15:35.600 lat (msec): min=63, max=8027, avg=1989.11, stdev=2797.04 00:15:35.600 clat percentiles (msec): 00:15:35.600 | 1.00th=[ 71], 5.00th=[ 192], 10.00th=[ 351], 20.00th=[ 388], 00:15:35.600 | 30.00th=[ 409], 40.00th=[ 460], 50.00th=[ 518], 60.00th=[ 575], 00:15:35.600 | 70.00th=[ 1099], 80.00th=[ 4732], 90.00th=[ 7483], 95.00th=[ 7752], 00:15:35.600 | 99.00th=[ 7953], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:15:35.600 | 99.99th=[ 8020] 00:15:35.600 bw ( KiB/s): min= 6144, max=319488, per=3.00%, avg=111403.11, stdev=100917.52, samples=9 00:15:35.600 iops : min= 6, max= 312, avg=108.78, stdev=98.55, samples=9 00:15:35.600 lat (msec) : 100=1.77%, 
250=5.00%, 500=40.32%, 750=18.55%, 1000=2.90% 00:15:35.600 lat (msec) : 2000=10.00%, >=2000=21.45% 00:15:35.600 cpu : usr=0.03%, sys=1.31%, ctx=1166, majf=0, minf=32769 00:15:35.600 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:15:35.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.600 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.600 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job3: (groupid=0, jobs=1): err= 0: pid=3227788: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=17, BW=17.5MiB/s (18.4MB/s)(176MiB/10033msec) 00:15:35.601 slat (usec): min=387, max=2120.3k, avg=56843.56, stdev=295627.71 00:15:35.601 clat (msec): min=27, max=7870, avg=1208.49, stdev=1555.79 00:15:35.601 lat (msec): min=32, max=9564, avg=1265.33, stdev=1678.54 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 33], 5.00th=[ 80], 10.00th=[ 169], 20.00th=[ 338], 00:15:35.601 | 30.00th=[ 489], 40.00th=[ 659], 50.00th=[ 818], 60.00th=[ 1028], 00:15:35.601 | 70.00th=[ 1318], 80.00th=[ 1519], 90.00th=[ 1636], 95.00th=[ 5738], 00:15:35.601 | 99.00th=[ 7819], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:15:35.601 | 99.99th=[ 7886] 00:15:35.601 lat (msec) : 50=2.84%, 100=3.41%, 250=9.09%, 500=15.34%, 750=15.91% 00:15:35.601 lat (msec) : 1000=10.80%, 2000=35.80%, >=2000=6.82% 00:15:35.601 cpu : usr=0.02%, sys=0.63%, ctx=702, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.1%, 32=18.2%, >=64=64.2% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:15:35.601 issued rwts: total=176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job3: (groupid=0, jobs=1): err= 0: pid=3227789: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=84, BW=85.0MiB/s (89.1MB/s)(852MiB/10025msec) 00:15:35.601 slat (usec): min=39, max=2044.3k, avg=11731.62, stdev=90067.07 00:15:35.601 clat (msec): min=23, max=4699, avg=980.55, stdev=723.64 00:15:35.601 lat (msec): min=25, max=4716, avg=992.29, stdev=735.02 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 53], 5.00th=[ 334], 10.00th=[ 477], 20.00th=[ 502], 00:15:35.601 | 30.00th=[ 550], 40.00th=[ 659], 50.00th=[ 869], 60.00th=[ 995], 00:15:35.601 | 70.00th=[ 1083], 80.00th=[ 1133], 90.00th=[ 1770], 95.00th=[ 1938], 00:15:35.601 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:15:35.601 | 99.99th=[ 4732] 00:15:35.601 bw ( KiB/s): min=40960, max=266240, per=3.67%, avg=136192.00, stdev=79326.04, samples=10 00:15:35.601 iops : min= 40, max= 260, avg=133.00, stdev=77.47, samples=10 00:15:35.601 lat (msec) : 50=0.94%, 100=0.82%, 250=2.35%, 500=15.26%, 750=24.65% 00:15:35.601 lat (msec) : 1000=16.67%, 2000=36.27%, >=2000=3.05% 00:15:35.601 cpu : usr=0.08%, sys=1.46%, ctx=1304, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.601 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job3: (groupid=0, jobs=1): err= 0: 
pid=3227790: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(347MiB/10030msec) 00:15:35.601 slat (usec): min=93, max=2050.3k, avg=28821.66, stdev=162210.45 00:15:35.601 clat (msec): min=27, max=6502, avg=1582.76, stdev=1447.30 00:15:35.601 lat (msec): min=37, max=8243, avg=1611.58, stdev=1492.26 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 58], 5.00th=[ 130], 10.00th=[ 368], 20.00th=[ 506], 00:15:35.601 | 30.00th=[ 609], 40.00th=[ 693], 50.00th=[ 718], 60.00th=[ 1083], 00:15:35.601 | 70.00th=[ 2433], 80.00th=[ 2802], 90.00th=[ 4178], 95.00th=[ 4329], 00:15:35.601 | 99.00th=[ 4463], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:15:35.601 | 99.99th=[ 6477] 00:15:35.601 bw ( KiB/s): min=40960, max=182272, per=2.63%, avg=97621.33, stdev=74698.21, samples=3 00:15:35.601 iops : min= 40, max= 178, avg=95.33, stdev=72.95, samples=3 00:15:35.601 lat (msec) : 50=0.86%, 100=2.88%, 250=4.03%, 500=10.95%, 750=33.72% 00:15:35.601 lat (msec) : 1000=4.90%, 2000=6.92%, >=2000=35.73% 00:15:35.601 cpu : usr=0.02%, sys=0.83%, ctx=1002, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.8% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:15:35.601 issued rwts: total=347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job4: (groupid=0, jobs=1): err= 0: pid=3227791: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=66, BW=66.5MiB/s (69.8MB/s)(688MiB/10339msec) 00:15:35.601 slat (usec): min=27, max=2020.5k, avg=14933.69, stdev=97133.85 00:15:35.601 clat (msec): min=60, max=4783, avg=1788.00, stdev=1158.86 00:15:35.601 lat (msec): min=599, max=4786, avg=1802.94, stdev=1159.94 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 600], 5.00th=[ 617], 10.00th=[ 726], 20.00th=[ 835], 00:15:35.601 | 30.00th=[ 969], 40.00th=[ 1116], 50.00th=[ 1401], 60.00th=[ 1636], 00:15:35.601 | 70.00th=[ 2140], 80.00th=[ 2567], 90.00th=[ 3641], 95.00th=[ 4597], 00:15:35.601 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:15:35.601 | 99.99th=[ 4799] 00:15:35.601 bw ( KiB/s): min=10240, max=217088, per=2.38%, avg=88221.54, stdev=59670.66, samples=13 00:15:35.601 iops : min= 10, max= 212, avg=86.15, stdev=58.27, samples=13 00:15:35.601 lat (msec) : 100=0.15%, 750=11.05%, 1000=24.27%, 2000=30.81%, >=2000=33.72% 00:15:35.601 cpu : usr=0.01%, sys=1.50%, ctx=1205, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.601 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job4: (groupid=0, jobs=1): err= 0: pid=3227792: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=36, BW=36.1MiB/s (37.8MB/s)(373MiB/10334msec) 00:15:35.601 slat (usec): min=56, max=2051.8k, avg=27523.45, stdev=198269.19 00:15:35.601 clat (msec): min=64, max=5008, avg=2086.69, stdev=1717.74 00:15:35.601 lat (msec): min=459, max=5010, avg=2114.22, stdev=1722.22 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 456], 5.00th=[ 502], 10.00th=[ 558], 20.00th=[ 651], 00:15:35.601 | 30.00th=[ 718], 40.00th=[ 735], 50.00th=[ 
802], 60.00th=[ 2140], 00:15:35.601 | 70.00th=[ 4245], 80.00th=[ 4463], 90.00th=[ 4530], 95.00th=[ 4597], 00:15:35.601 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:15:35.601 | 99.99th=[ 5000] 00:15:35.601 bw ( KiB/s): min= 6144, max=268288, per=2.70%, avg=100352.00, stdev=114688.00, samples=5 00:15:35.601 iops : min= 6, max= 262, avg=98.00, stdev=112.00, samples=5 00:15:35.601 lat (msec) : 100=0.27%, 500=4.29%, 750=42.90%, 1000=5.36%, 2000=4.83% 00:15:35.601 lat (msec) : >=2000=42.36% 00:15:35.601 cpu : usr=0.01%, sys=1.15%, ctx=703, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.1% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.601 issued rwts: total=373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job4: (groupid=0, jobs=1): err= 0: pid=3227793: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=4, BW=4937KiB/s (5056kB/s)(50.0MiB/10370msec) 00:15:35.601 slat (usec): min=413, max=2095.5k, avg=205678.19, stdev=599395.00 00:15:35.601 clat (msec): min=85, max=10367, avg=7601.97, stdev=3022.97 00:15:35.601 lat (msec): min=2105, max=10369, avg=7807.65, stdev=2845.76 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 86], 5.00th=[ 2140], 10.00th=[ 4245], 20.00th=[ 4279], 00:15:35.601 | 30.00th=[ 4329], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10268], 00:15:35.601 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.601 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.601 | 99.99th=[10402] 00:15:35.601 lat (msec) : 100=2.00%, >=2000=98.00% 00:15:35.601 cpu : usr=0.00%, sys=0.29%, ctx=80, majf=0, minf=12801 00:15:35.601 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.601 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job4: (groupid=0, jobs=1): err= 0: pid=3227794: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=59, BW=59.5MiB/s (62.4MB/s)(620MiB/10425msec) 00:15:35.601 slat (usec): min=44, max=2216.8k, avg=16706.19, stdev=121093.87 00:15:35.601 clat (msec): min=64, max=4458, avg=1997.69, stdev=1354.39 00:15:35.601 lat (msec): min=554, max=6442, avg=2014.40, stdev=1361.01 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 575], 5.00th=[ 600], 10.00th=[ 642], 20.00th=[ 802], 00:15:35.601 | 30.00th=[ 852], 40.00th=[ 1150], 50.00th=[ 1552], 60.00th=[ 2140], 00:15:35.601 | 70.00th=[ 2500], 80.00th=[ 3742], 90.00th=[ 4396], 95.00th=[ 4396], 00:15:35.601 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:15:35.601 | 99.99th=[ 4463] 00:15:35.601 bw ( KiB/s): min= 4096, max=204800, per=2.47%, avg=91592.45, stdev=69415.38, samples=11 00:15:35.601 iops : min= 4, max= 200, avg=89.36, stdev=67.84, samples=11 00:15:35.601 lat (msec) : 100=0.16%, 750=14.52%, 1000=19.68%, 2000=25.00%, >=2000=40.65% 00:15:35.601 cpu : usr=0.06%, sys=1.21%, ctx=1034, majf=0, minf=32769 00:15:35.601 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:15:35.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:35.601 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.601 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.601 job4: (groupid=0, jobs=1): err= 0: pid=3227795: Mon Dec 9 11:54:42 2024 00:15:35.601 read: IOPS=25, BW=25.8MiB/s (27.0MB/s)(266MiB/10327msec) 00:15:35.601 slat (usec): min=71, max=2161.8k, avg=38507.31, stdev=227313.03 00:15:35.601 clat (msec): min=82, max=8246, avg=4425.25, stdev=3134.51 00:15:35.601 lat (msec): min=934, max=8248, avg=4463.76, stdev=3124.57 00:15:35.601 clat percentiles (msec): 00:15:35.601 | 1.00th=[ 936], 5.00th=[ 995], 10.00th=[ 1083], 20.00th=[ 1301], 00:15:35.601 | 30.00th=[ 1502], 40.00th=[ 1854], 50.00th=[ 1972], 60.00th=[ 7483], 00:15:35.601 | 70.00th=[ 7684], 80.00th=[ 7819], 90.00th=[ 7953], 95.00th=[ 8087], 00:15:35.601 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:15:35.601 | 99.99th=[ 8221] 00:15:35.601 bw ( KiB/s): min= 2048, max=137216, per=1.27%, avg=47104.00, stdev=62267.28, samples=6 00:15:35.601 iops : min= 2, max= 134, avg=46.00, stdev=60.81, samples=6 00:15:35.601 lat (msec) : 100=0.38%, 1000=5.64%, 2000=44.36%, >=2000=49.62% 00:15:35.601 cpu : usr=0.03%, sys=0.93%, ctx=523, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.3% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:15:35.602 issued rwts: total=266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227796: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=5, BW=5727KiB/s (5864kB/s)(58.0MiB/10371msec) 00:15:35.602 slat (usec): min=537, max=2086.7k, avg=177654.13, stdev=558161.62 00:15:35.602 clat (msec): min=66, max=10369, avg=7987.99, stdev=2780.45 00:15:35.602 lat (msec): min=2103, max=10370, avg=8165.64, stdev=2587.94 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 67], 5.00th=[ 2123], 10.00th=[ 2198], 20.00th=[ 6409], 00:15:35.602 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10268], 00:15:35.602 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:15:35.602 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:15:35.602 | 99.99th=[10402] 00:15:35.602 lat (msec) : 100=1.72%, >=2000=98.28% 00:15:35.602 cpu : usr=0.00%, sys=0.46%, ctx=73, majf=0, minf=14849 00:15:35.602 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:35.602 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227797: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=24, BW=24.3MiB/s (25.5MB/s)(253MiB/10415msec) 00:15:35.602 slat (usec): min=83, max=2067.0k, avg=40824.66, stdev=237583.97 00:15:35.602 clat (msec): min=84, max=9293, avg=4943.52, stdev=3406.27 00:15:35.602 lat (msec): min=1020, max=9312, avg=4984.34, stdev=3400.73 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 1020], 5.00th=[ 1053], 10.00th=[ 1083], 20.00th=[ 1150], 00:15:35.602 | 30.00th=[ 
1217], 40.00th=[ 2165], 50.00th=[ 5201], 60.00th=[ 7416], 00:15:35.602 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:15:35.602 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:15:35.602 | 99.99th=[ 9329] 00:15:35.602 bw ( KiB/s): min= 8192, max=92160, per=0.99%, avg=36575.43, stdev=32196.58, samples=7 00:15:35.602 iops : min= 8, max= 90, avg=35.57, stdev=31.57, samples=7 00:15:35.602 lat (msec) : 100=0.40%, 2000=38.74%, >=2000=60.87% 00:15:35.602 cpu : usr=0.00%, sys=0.73%, ctx=530, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.6%, >=64=75.1% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:15:35.602 issued rwts: total=253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227798: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=27, BW=27.7MiB/s (29.1MB/s)(285MiB/10280msec) 00:15:35.602 slat (usec): min=37, max=2171.4k, avg=35981.68, stdev=210035.68 00:15:35.602 clat (msec): min=23, max=8574, avg=3886.55, stdev=1838.75 00:15:35.602 lat (msec): min=777, max=10144, avg=3922.53, stdev=1854.84 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 776], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 2366], 00:15:35.602 | 30.00th=[ 3104], 40.00th=[ 3473], 50.00th=[ 3943], 60.00th=[ 4279], 00:15:35.602 | 70.00th=[ 4597], 80.00th=[ 4732], 90.00th=[ 7617], 95.00th=[ 7684], 00:15:35.602 | 99.00th=[ 8490], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:15:35.602 | 99.99th=[ 8557] 00:15:35.602 bw ( KiB/s): min= 2048, max=75776, per=1.24%, avg=45933.71, stdev=27371.20, samples=7 00:15:35.602 iops : min= 2, max= 74, avg=44.86, stdev=26.73, samples=7 00:15:35.602 lat (msec) : 50=0.35%, 1000=10.53%, 2000=1.75%, >=2000=87.37% 00:15:35.602 cpu : usr=0.04%, sys=1.03%, ctx=430, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:15:35.602 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227799: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=38, BW=38.0MiB/s (39.9MB/s)(393MiB/10337msec) 00:15:35.602 slat (usec): min=36, max=2049.7k, avg=26128.50, stdev=184899.80 00:15:35.602 clat (msec): min=65, max=6544, avg=1678.84, stdev=1176.03 00:15:35.602 lat (msec): min=730, max=6545, avg=1704.97, stdev=1197.72 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 735], 5.00th=[ 735], 10.00th=[ 735], 20.00th=[ 785], 00:15:35.602 | 30.00th=[ 810], 40.00th=[ 844], 50.00th=[ 869], 60.00th=[ 2265], 00:15:35.602 | 70.00th=[ 2433], 80.00th=[ 2601], 90.00th=[ 2802], 95.00th=[ 2869], 00:15:35.602 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:15:35.602 | 99.99th=[ 6544] 00:15:35.602 bw ( KiB/s): min=45056, max=188416, per=3.66%, avg=135680.00, stdev=62620.48, samples=4 00:15:35.602 iops : min= 44, max= 184, avg=132.50, stdev=61.15, samples=4 00:15:35.602 lat (msec) : 100=0.25%, 750=12.72%, 1000=43.26%, >=2000=43.77% 00:15:35.602 cpu : usr=0.01%, sys=1.17%, ctx=495, majf=0, minf=32769 00:15:35.602 IO 
depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.602 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227800: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=89, BW=89.5MiB/s (93.9MB/s)(898MiB/10029msec) 00:15:35.602 slat (usec): min=39, max=2085.8k, avg=11129.53, stdev=130497.68 00:15:35.602 clat (msec): min=27, max=8365, avg=347.69, stdev=702.68 00:15:35.602 lat (msec): min=29, max=8451, avg=358.82, stdev=752.87 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 44], 5.00th=[ 109], 10.00th=[ 199], 20.00th=[ 253], 00:15:35.602 | 30.00th=[ 255], 40.00th=[ 257], 50.00th=[ 257], 60.00th=[ 257], 00:15:35.602 | 70.00th=[ 259], 80.00th=[ 259], 90.00th=[ 271], 95.00th=[ 313], 00:15:35.602 | 99.00th=[ 4530], 99.50th=[ 6678], 99.90th=[ 8356], 99.95th=[ 8356], 00:15:35.602 | 99.99th=[ 8356] 00:15:35.602 bw ( KiB/s): min=57344, max=507904, per=9.60%, avg=356352.00, stdev=258956.62, samples=3 00:15:35.602 iops : min= 56, max= 496, avg=348.00, stdev=252.89, samples=3 00:15:35.602 lat (msec) : 50=1.45%, 100=3.01%, 250=10.47%, 500=82.29%, >=2000=2.78% 00:15:35.602 cpu : usr=0.06%, sys=1.67%, ctx=816, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.602 issued rwts: total=898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227801: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=34, BW=34.7MiB/s (36.3MB/s)(361MiB/10417msec) 00:15:35.602 slat (usec): min=65, max=3561.2k, avg=28774.12, stdev=226180.23 00:15:35.602 clat (msec): min=27, max=8493, avg=3498.51, stdev=2561.29 00:15:35.602 lat (msec): min=986, max=8598, avg=3527.29, stdev=2569.18 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 1011], 5.00th=[ 1028], 10.00th=[ 1070], 20.00th=[ 1116], 00:15:35.602 | 30.00th=[ 1401], 40.00th=[ 1569], 50.00th=[ 1821], 60.00th=[ 3440], 00:15:35.602 | 70.00th=[ 5940], 80.00th=[ 7215], 90.00th=[ 7282], 95.00th=[ 7349], 00:15:35.602 | 99.00th=[ 7416], 99.50th=[ 7483], 99.90th=[ 8490], 99.95th=[ 8490], 00:15:35.602 | 99.99th=[ 8490] 00:15:35.602 bw ( KiB/s): min= 4096, max=137216, per=1.43%, avg=53015.89, stdev=49500.24, samples=9 00:15:35.602 iops : min= 4, max= 134, avg=51.67, stdev=48.42, samples=9 00:15:35.602 lat (msec) : 50=0.28%, 1000=0.55%, 2000=50.14%, >=2000=49.03% 00:15:35.602 cpu : usr=0.00%, sys=0.90%, ctx=709, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.602 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227802: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=29, BW=29.7MiB/s (31.1MB/s)(307MiB/10347msec) 00:15:35.602 slat (usec): min=54, 
max=2066.9k, avg=33481.40, stdev=224053.49 00:15:35.602 clat (msec): min=66, max=8804, avg=3992.86, stdev=2980.69 00:15:35.602 lat (msec): min=869, max=8806, avg=4026.34, stdev=2983.02 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 877], 5.00th=[ 877], 10.00th=[ 902], 20.00th=[ 936], 00:15:35.602 | 30.00th=[ 1028], 40.00th=[ 2022], 50.00th=[ 2802], 60.00th=[ 5000], 00:15:35.602 | 70.00th=[ 6879], 80.00th=[ 7013], 90.00th=[ 8490], 95.00th=[ 8658], 00:15:35.602 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:15:35.602 | 99.99th=[ 8792] 00:15:35.602 bw ( KiB/s): min=20480, max=112640, per=1.98%, avg=73318.40, stdev=42354.50, samples=5 00:15:35.602 iops : min= 20, max= 110, avg=71.60, stdev=41.36, samples=5 00:15:35.602 lat (msec) : 100=0.33%, 1000=26.06%, 2000=10.75%, >=2000=62.87% 00:15:35.602 cpu : usr=0.01%, sys=0.64%, ctx=565, majf=0, minf=32769 00:15:35.602 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:15:35.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.602 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:15:35.602 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.602 job4: (groupid=0, jobs=1): err= 0: pid=3227803: Mon Dec 9 11:54:42 2024 00:15:35.602 read: IOPS=19, BW=20.0MiB/s (20.9MB/s)(208MiB/10413msec) 00:15:35.602 slat (usec): min=93, max=2104.5k, avg=49743.54, stdev=260846.55 00:15:35.602 clat (msec): min=65, max=6458, avg=4004.15, stdev=1674.13 00:15:35.602 lat (msec): min=913, max=8562, avg=4053.89, stdev=1672.36 00:15:35.602 clat percentiles (msec): 00:15:35.602 | 1.00th=[ 911], 5.00th=[ 1070], 10.00th=[ 1200], 20.00th=[ 1754], 00:15:35.602 | 30.00th=[ 3473], 40.00th=[ 4111], 50.00th=[ 5067], 60.00th=[ 5134], 00:15:35.602 | 70.00th=[ 5201], 80.00th=[ 5269], 90.00th=[ 5537], 95.00th=[ 5537], 00:15:35.602 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6477], 99.95th=[ 6477], 00:15:35.602 | 99.99th=[ 6477] 00:15:35.602 bw ( KiB/s): min=12288, max=100352, per=1.10%, avg=40960.00, stdev=40306.25, samples=4 00:15:35.603 iops : min= 12, max= 98, avg=40.00, stdev=39.36, samples=4 00:15:35.603 lat (msec) : 100=0.48%, 1000=0.96%, 2000=23.56%, >=2000=75.00% 00:15:35.603 cpu : usr=0.00%, sys=0.76%, ctx=326, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:15:35.603 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227804: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=38, BW=38.6MiB/s (40.5MB/s)(398MiB/10308msec) 00:15:35.603 slat (usec): min=393, max=2084.8k, avg=25732.29, stdev=190831.29 00:15:35.603 clat (msec): min=64, max=5026, avg=2068.65, stdev=1832.43 00:15:35.603 lat (msec): min=568, max=5030, avg=2094.38, stdev=1836.52 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 567], 5.00th=[ 600], 10.00th=[ 617], 20.00th=[ 651], 00:15:35.603 | 30.00th=[ 693], 40.00th=[ 718], 50.00th=[ 743], 60.00th=[ 818], 00:15:35.603 | 70.00th=[ 4329], 80.00th=[ 4530], 90.00th=[ 4799], 95.00th=[ 4933], 00:15:35.603 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:15:35.603 | 99.99th=[ 
5000] 00:15:35.603 bw ( KiB/s): min= 2048, max=192512, per=2.98%, avg=110592.00, stdev=92534.70, samples=5 00:15:35.603 iops : min= 2, max= 188, avg=108.00, stdev=90.37, samples=5 00:15:35.603 lat (msec) : 100=0.25%, 750=53.27%, 1000=8.29%, >=2000=38.19% 00:15:35.603 cpu : usr=0.02%, sys=0.90%, ctx=708, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:15:35.603 issued rwts: total=398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227805: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=44, BW=44.9MiB/s (47.1MB/s)(467MiB/10392msec) 00:15:35.603 slat (usec): min=40, max=2023.6k, avg=22048.86, stdev=169378.68 00:15:35.603 clat (msec): min=91, max=4596, avg=2018.53, stdev=1665.04 00:15:35.603 lat (msec): min=496, max=4598, avg=2040.58, stdev=1667.60 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 502], 5.00th=[ 542], 10.00th=[ 584], 20.00th=[ 642], 00:15:35.603 | 30.00th=[ 785], 40.00th=[ 877], 50.00th=[ 961], 60.00th=[ 1003], 00:15:35.603 | 70.00th=[ 3977], 80.00th=[ 4329], 90.00th=[ 4463], 95.00th=[ 4530], 00:15:35.603 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:15:35.603 | 99.99th=[ 4597] 00:15:35.603 bw ( KiB/s): min= 4096, max=227328, per=3.12%, avg=115712.00, stdev=90267.79, samples=6 00:15:35.603 iops : min= 4, max= 222, avg=113.00, stdev=88.15, samples=6 00:15:35.603 lat (msec) : 100=0.21%, 500=0.86%, 750=25.48%, 1000=33.62%, 2000=2.78% 00:15:35.603 lat (msec) : >=2000=37.04% 00:15:35.603 cpu : usr=0.03%, sys=1.29%, ctx=404, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:35.603 issued rwts: total=467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227806: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=72, BW=72.1MiB/s (75.6MB/s)(722MiB/10015msec) 00:15:35.603 slat (usec): min=31, max=2061.5k, avg=13848.00, stdev=111883.88 00:15:35.603 clat (msec): min=14, max=5763, avg=1092.30, stdev=1282.28 00:15:35.603 lat (msec): min=15, max=5765, avg=1106.15, stdev=1296.87 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 22], 5.00th=[ 51], 10.00th=[ 87], 20.00th=[ 130], 00:15:35.603 | 30.00th=[ 132], 40.00th=[ 451], 50.00th=[ 523], 60.00th=[ 659], 00:15:35.603 | 70.00th=[ 1603], 80.00th=[ 2232], 90.00th=[ 2836], 95.00th=[ 3104], 00:15:35.603 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:15:35.603 | 99.99th=[ 5738] 00:15:35.603 bw ( KiB/s): min= 2048, max=530432, per=4.10%, avg=152320.00, stdev=176915.29, samples=8 00:15:35.603 iops : min= 2, max= 518, avg=148.75, stdev=172.77, samples=8 00:15:35.603 lat (msec) : 20=0.83%, 50=4.16%, 100=6.93%, 250=22.02%, 500=12.60% 00:15:35.603 lat (msec) : 750=16.90%, 1000=4.02%, 2000=11.63%, >=2000=20.91% 00:15:35.603 cpu : usr=0.00%, sys=1.15%, ctx=1459, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.603 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227807: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=94, BW=94.9MiB/s (99.5MB/s)(950MiB/10014msec) 00:15:35.603 slat (usec): min=27, max=1346.8k, avg=10522.96, stdev=51804.55 00:15:35.603 clat (msec): min=12, max=3191, avg=1270.61, stdev=807.71 00:15:35.603 lat (msec): min=13, max=3194, avg=1281.14, stdev=810.56 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 72], 5.00th=[ 262], 10.00th=[ 527], 20.00th=[ 584], 00:15:35.603 | 30.00th=[ 642], 40.00th=[ 919], 50.00th=[ 1099], 60.00th=[ 1334], 00:15:35.603 | 70.00th=[ 1536], 80.00th=[ 2039], 90.00th=[ 2467], 95.00th=[ 3071], 00:15:35.603 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:15:35.603 | 99.99th=[ 3205] 00:15:35.603 bw ( KiB/s): min=34816, max=245760, per=2.70%, avg=100078.93, stdev=64195.98, samples=15 00:15:35.603 iops : min= 34, max= 240, avg=97.73, stdev=62.69, samples=15 00:15:35.603 lat (msec) : 20=0.32%, 50=0.21%, 100=1.58%, 250=2.63%, 500=4.74% 00:15:35.603 lat (msec) : 750=27.16%, 1000=10.21%, 2000=32.42%, >=2000=20.74% 00:15:35.603 cpu : usr=0.08%, sys=1.26%, ctx=1403, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.603 issued rwts: total=950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227808: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=117, BW=118MiB/s (123MB/s)(1180MiB/10035msec) 00:15:35.603 slat (usec): min=36, max=2078.6k, avg=8476.80, stdev=79274.71 00:15:35.603 clat (msec): min=27, max=5207, avg=621.00, stdev=473.45 00:15:35.603 lat (msec): min=48, max=5297, avg=629.48, stdev=494.35 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 129], 5.00th=[ 262], 10.00th=[ 271], 20.00th=[ 284], 00:15:35.603 | 30.00th=[ 300], 40.00th=[ 472], 50.00th=[ 502], 60.00th=[ 542], 00:15:35.603 | 70.00th=[ 625], 80.00th=[ 726], 90.00th=[ 1485], 95.00th=[ 1737], 00:15:35.603 | 99.00th=[ 1888], 99.50th=[ 1972], 99.90th=[ 3574], 99.95th=[ 5201], 00:15:35.603 | 99.99th=[ 5201] 00:15:35.603 bw ( KiB/s): min=30720, max=477184, per=5.81%, avg=215486.80, stdev=148671.13, samples=10 00:15:35.603 iops : min= 30, max= 466, avg=210.40, stdev=145.21, samples=10 00:15:35.603 lat (msec) : 50=0.17%, 100=0.51%, 250=2.80%, 500=46.19%, 750=32.20% 00:15:35.603 lat (msec) : 1000=4.66%, 2000=13.14%, >=2000=0.34% 00:15:35.603 cpu : usr=0.02%, sys=1.61%, ctx=1943, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.603 issued rwts: total=1180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227809: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=32, BW=32.3MiB/s (33.9MB/s)(326MiB/10083msec) 
00:15:35.603 slat (usec): min=30, max=2053.4k, avg=30726.47, stdev=185994.16 00:15:35.603 clat (msec): min=64, max=5951, avg=2807.31, stdev=1830.21 00:15:35.603 lat (msec): min=114, max=5962, avg=2838.04, stdev=1835.99 00:15:35.603 clat percentiles (msec): 00:15:35.603 | 1.00th=[ 118], 5.00th=[ 309], 10.00th=[ 693], 20.00th=[ 894], 00:15:35.603 | 30.00th=[ 1150], 40.00th=[ 1469], 50.00th=[ 3507], 60.00th=[ 3775], 00:15:35.603 | 70.00th=[ 4144], 80.00th=[ 4245], 90.00th=[ 5470], 95.00th=[ 5738], 00:15:35.603 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:15:35.603 | 99.99th=[ 5940] 00:15:35.603 bw ( KiB/s): min=22528, max=129024, per=1.56%, avg=57929.14, stdev=38924.83, samples=7 00:15:35.603 iops : min= 22, max= 126, avg=56.57, stdev=38.01, samples=7 00:15:35.603 lat (msec) : 100=0.31%, 250=4.29%, 500=3.07%, 750=6.44%, 1000=12.88% 00:15:35.603 lat (msec) : 2000=17.18%, >=2000=55.83% 00:15:35.603 cpu : usr=0.01%, sys=1.08%, ctx=921, majf=0, minf=32769 00:15:35.603 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.8%, >=64=80.7% 00:15:35.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.603 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:15:35.603 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.603 job5: (groupid=0, jobs=1): err= 0: pid=3227810: Mon Dec 9 11:54:42 2024 00:15:35.603 read: IOPS=68, BW=68.3MiB/s (71.6MB/s)(688MiB/10075msec) 00:15:35.603 slat (usec): min=183, max=2065.1k, avg=14542.21, stdev=79552.21 00:15:35.603 clat (msec): min=63, max=3900, avg=1752.80, stdev=1037.85 00:15:35.603 lat (msec): min=76, max=3928, avg=1767.34, stdev=1041.78 00:15:35.603 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 121], 5.00th=[ 380], 10.00th=[ 642], 20.00th=[ 1133], 00:15:35.604 | 30.00th=[ 1351], 40.00th=[ 1469], 50.00th=[ 1536], 60.00th=[ 1586], 00:15:35.604 | 70.00th=[ 1636], 80.00th=[ 1737], 90.00th=[ 3775], 95.00th=[ 3842], 00:15:35.604 | 99.00th=[ 3876], 99.50th=[ 3876], 99.90th=[ 3910], 99.95th=[ 3910], 00:15:35.604 | 99.99th=[ 3910] 00:15:35.604 bw ( KiB/s): min=10240, max=118784, per=2.06%, avg=76458.67, stdev=31525.39, samples=15 00:15:35.604 iops : min= 10, max= 116, avg=74.67, stdev=30.79, samples=15 00:15:35.604 lat (msec) : 100=0.58%, 250=2.33%, 500=4.07%, 750=4.80%, 1000=4.36% 00:15:35.604 lat (msec) : 2000=65.41%, >=2000=18.46% 00:15:35.604 cpu : usr=0.07%, sys=2.32%, ctx=1257, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.604 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227811: Mon Dec 9 11:54:42 2024 00:15:35.604 read: IOPS=58, BW=58.8MiB/s (61.7MB/s)(592MiB/10067msec) 00:15:35.604 slat (usec): min=34, max=2047.4k, avg=16892.51, stdev=119272.07 00:15:35.604 clat (msec): min=63, max=4871, avg=1654.92, stdev=1434.58 00:15:35.604 lat (msec): min=97, max=5444, avg=1671.81, stdev=1444.32 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 163], 5.00th=[ 249], 10.00th=[ 268], 20.00th=[ 330], 00:15:35.604 | 30.00th=[ 401], 40.00th=[ 502], 50.00th=[ 1011], 60.00th=[ 1469], 00:15:35.604 | 70.00th=[ 2937], 80.00th=[ 
3339], 90.00th=[ 3507], 95.00th=[ 3641], 00:15:35.604 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:15:35.604 | 99.99th=[ 4866] 00:15:35.604 bw ( KiB/s): min=14336, max=290816, per=3.21%, avg=119040.00, stdev=90174.73, samples=8 00:15:35.604 iops : min= 14, max= 284, avg=116.25, stdev=88.06, samples=8 00:15:35.604 lat (msec) : 100=0.34%, 250=4.73%, 500=34.80%, 750=3.21%, 1000=6.59% 00:15:35.604 lat (msec) : 2000=12.33%, >=2000=38.01% 00:15:35.604 cpu : usr=0.00%, sys=1.08%, ctx=1392, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.4% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.604 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227812: Mon Dec 9 11:54:42 2024 00:15:35.604 read: IOPS=32, BW=32.1MiB/s (33.6MB/s)(331MiB/10315msec) 00:15:35.604 slat (usec): min=372, max=2069.9k, avg=30961.24, stdev=212322.54 00:15:35.604 clat (msec): min=64, max=5026, avg=2470.44, stdev=1811.49 00:15:35.604 lat (msec): min=743, max=5029, avg=2501.40, stdev=1810.76 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 743], 5.00th=[ 760], 10.00th=[ 810], 20.00th=[ 919], 00:15:35.604 | 30.00th=[ 944], 40.00th=[ 953], 50.00th=[ 969], 60.00th=[ 3004], 00:15:35.604 | 70.00th=[ 4463], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 4933], 00:15:35.604 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:15:35.604 | 99.99th=[ 5000] 00:15:35.604 bw ( KiB/s): min= 2048, max=169984, per=2.24%, avg=83148.80, stdev=76855.96, samples=5 00:15:35.604 iops : min= 2, max= 166, avg=81.20, stdev=75.05, samples=5 00:15:35.604 lat (msec) : 100=0.30%, 750=3.63%, 1000=51.06%, >=2000=45.02% 00:15:35.604 cpu : usr=0.01%, sys=0.95%, ctx=693, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=81.0% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:15:35.604 issued rwts: total=331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227813: Mon Dec 9 11:54:42 2024 00:15:35.604 read: IOPS=109, BW=109MiB/s (114MB/s)(1126MiB/10328msec) 00:15:35.604 slat (usec): min=35, max=2049.5k, avg=9106.65, stdev=113255.23 00:15:35.604 clat (msec): min=66, max=6234, avg=685.04, stdev=1142.86 00:15:35.604 lat (msec): min=235, max=6235, avg=694.14, stdev=1155.37 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:15:35.604 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:15:35.604 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 2333], 95.00th=[ 2433], 00:15:35.604 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:15:35.604 | 99.99th=[ 6208] 00:15:35.604 bw ( KiB/s): min=141312, max=536576, per=11.01%, avg=408780.80, stdev=179679.76, samples=5 00:15:35.604 iops : min= 138, max= 524, avg=399.20, stdev=175.47, samples=5 00:15:35.604 lat (msec) : 100=0.09%, 250=78.06%, 500=5.77%, 2000=0.09%, >=2000=15.99% 00:15:35.604 cpu : usr=0.11%, sys=1.67%, ctx=993, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.1%, 
2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.604 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227814: Mon Dec 9 11:54:42 2024 00:15:35.604 read: IOPS=71, BW=71.1MiB/s (74.6MB/s)(716MiB/10068msec) 00:15:35.604 slat (usec): min=34, max=2057.2k, avg=13978.54, stdev=123880.79 00:15:35.604 clat (msec): min=56, max=4652, avg=1139.69, stdev=928.31 00:15:35.604 lat (msec): min=68, max=4656, avg=1153.67, stdev=937.98 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 203], 5.00th=[ 472], 10.00th=[ 518], 20.00th=[ 600], 00:15:35.604 | 30.00th=[ 634], 40.00th=[ 667], 50.00th=[ 735], 60.00th=[ 810], 00:15:35.604 | 70.00th=[ 927], 80.00th=[ 2467], 90.00th=[ 2601], 95.00th=[ 2702], 00:15:35.604 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:15:35.604 | 99.99th=[ 4665] 00:15:35.604 bw ( KiB/s): min=22528, max=270336, per=4.06%, avg=150784.00, stdev=79807.74, samples=8 00:15:35.604 iops : min= 22, max= 264, avg=147.25, stdev=77.94, samples=8 00:15:35.604 lat (msec) : 100=0.28%, 250=1.26%, 500=6.01%, 750=43.72%, 1000=20.53% 00:15:35.604 lat (msec) : 2000=7.40%, >=2000=20.81% 00:15:35.604 cpu : usr=0.01%, sys=1.13%, ctx=1388, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.604 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227815: Mon Dec 9 11:54:42 2024 00:15:35.604 read: IOPS=70, BW=71.0MiB/s (74.4MB/s)(714MiB/10060msec) 00:15:35.604 slat (usec): min=59, max=2071.3k, avg=13997.62, stdev=78281.30 00:15:35.604 clat (msec): min=58, max=3976, avg=1590.28, stdev=1027.95 00:15:35.604 lat (msec): min=61, max=3993, avg=1604.28, stdev=1032.36 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 84], 5.00th=[ 253], 10.00th=[ 550], 20.00th=[ 1045], 00:15:35.604 | 30.00th=[ 1150], 40.00th=[ 1250], 50.00th=[ 1318], 60.00th=[ 1452], 00:15:35.604 | 70.00th=[ 1552], 80.00th=[ 1603], 90.00th=[ 3641], 95.00th=[ 3910], 00:15:35.604 | 99.00th=[ 3977], 99.50th=[ 3977], 99.90th=[ 3977], 99.95th=[ 3977], 00:15:35.604 | 99.99th=[ 3977] 00:15:35.604 bw ( KiB/s): min= 6144, max=147456, per=2.49%, avg=92475.08, stdev=33483.87, samples=13 00:15:35.604 iops : min= 6, max= 144, avg=90.31, stdev=32.70, samples=13 00:15:35.604 lat (msec) : 100=1.40%, 250=3.50%, 500=4.20%, 750=3.92%, 1000=4.06% 00:15:35.604 lat (msec) : 2000=66.39%, >=2000=16.53% 00:15:35.604 cpu : usr=0.07%, sys=2.19%, ctx=1263, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.604 issued rwts: total=714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 job5: (groupid=0, jobs=1): err= 0: pid=3227816: Mon Dec 9 11:54:42 2024 
00:15:35.604 read: IOPS=55, BW=55.6MiB/s (58.3MB/s)(558MiB/10043msec) 00:15:35.604 slat (usec): min=32, max=2121.5k, avg=17943.08, stdev=116043.78 00:15:35.604 clat (msec): min=27, max=5437, avg=1284.38, stdev=638.98 00:15:35.604 lat (msec): min=48, max=5442, avg=1302.33, stdev=662.89 00:15:35.604 clat percentiles (msec): 00:15:35.604 | 1.00th=[ 67], 5.00th=[ 372], 10.00th=[ 397], 20.00th=[ 827], 00:15:35.604 | 30.00th=[ 1116], 40.00th=[ 1234], 50.00th=[ 1284], 60.00th=[ 1485], 00:15:35.604 | 70.00th=[ 1569], 80.00th=[ 1670], 90.00th=[ 1754], 95.00th=[ 1804], 00:15:35.604 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5470], 99.95th=[ 5470], 00:15:35.604 | 99.99th=[ 5470] 00:15:35.604 bw ( KiB/s): min=22528, max=182272, per=2.37%, avg=88064.00, stdev=43390.97, samples=10 00:15:35.604 iops : min= 22, max= 178, avg=86.00, stdev=42.37, samples=10 00:15:35.604 lat (msec) : 50=0.36%, 100=0.90%, 250=1.97%, 500=8.42%, 750=3.58% 00:15:35.604 lat (msec) : 1000=10.75%, 2000=72.76%, >=2000=1.25% 00:15:35.604 cpu : usr=0.00%, sys=1.01%, ctx=1218, majf=0, minf=32769 00:15:35.604 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:15:35.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.604 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:35.604 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.604 00:15:35.604 Run status group 0 (all jobs): 00:15:35.604 READ: bw=3625MiB/s (3801MB/s), 2663KiB/s-147MiB/s (2727kB/s-154MB/s), io=37.1GiB (39.8GB), run=10014-10476msec 00:15:35.604 00:15:35.604 Disk stats (read/write): 00:15:35.604 nvme0n1: ios=33187/0, merge=0/0, ticks=4990789/0, in_queue=4990789, util=98.54% 00:15:35.604 nvme1n1: ios=70237/0, merge=0/0, ticks=6338854/0, in_queue=6338854, util=98.60% 00:15:35.604 nvme2n1: ios=54676/0, merge=0/0, ticks=4757077/0, in_queue=4757077, util=98.74% 00:15:35.604 nvme3n1: ios=36178/0, merge=0/0, ticks=6513381/0, in_queue=6513381, util=98.61% 00:15:35.604 nvme4n1: ios=37885/0, merge=0/0, ticks=6164400/0, in_queue=6164400, util=99.01% 00:15:35.604 nvme5n1: ios=69869/0, merge=0/0, ticks=7157536/0, in_queue=7157536, util=99.16% 00:15:35.604 11:54:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:15:35.604 11:54:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:15:35.605 11:54:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:35.605 11:54:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:15:35.605 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:35.605 11:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:35.605 11:54:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:36.538 11:54:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:37.471 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 
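The lsblk/grep pairs traced above for serials SPDK00000000000000 and SPDK00000000000001, and repeated below for the remaining subsystems, come from the waitforserial_disconnect helper: after each `nvme disconnect`, the script polls the block-device list until the namespace with the given serial number disappears, and only then removes the subsystem over RPC with nvmf_delete_subsystem. A minimal sketch of that polling idiom, assuming a retry bound and sleep interval that the trace does not show:

  waitforserial_disconnect() {
    local serial=$1 i=0
    # Poll until no block device reports the given NVMe serial number.
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
      ((++i <= 15)) || return 1   # assumed bound of ~15 retries
      sleep 1
    done
    return 0
  }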
00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:37.471 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.472 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:37.472 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.472 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:37.472 11:54:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:15:38.404 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:15:38.404 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:15:38.404 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:38.404 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:38.404 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:15:38.404 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:38.405 11:54:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:39.776 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:39.776 11:54:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:40.709 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:40.709 rmmod nvme_rdma 00:15:40.709 rmmod nvme_fabrics 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3226328 ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3226328 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3226328 ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3226328 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226328 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226328' 00:15:40.709 killing process with pid 3226328 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3226328 00:15:40.709 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3226328 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:40.968 00:15:40.968 real 0m30.645s 00:15:40.968 user 1m45.018s 00:15:40.968 sys 0m14.689s 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:40.968 ************************************ 00:15:40.968 END TEST nvmf_srq_overwhelm 00:15:40.968 ************************************ 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.968 ************************************ 00:15:40.968 START TEST nvmf_shutdown 00:15:40.968 ************************************ 00:15:40.968 11:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:41.228 
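The teardown just traced is the standard nvmftestfini sequence: clear the exit trap, unload the host-side nvme-rdma and nvme-fabrics modules inside a bounded retry loop with errexit relaxed (modprobe -r fails while a module is still in use), then killprocess confirms the target PID is alive and is an SPDK reactor rather than sudo before killing and reaping it. A condensed sketch of that shape, with the retry sleep assumed and the checks simplified relative to the trace:

  nvmfcleanup_sketch() {
    sync
    set +e                          # tolerate "module in use" failures
    for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1                       # assumed back-off between attempts
    done
    set -e
  }

  killprocess_sketch() {
    local pid=$1
    kill -0 "$pid"                                       # still running?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    kill "$pid" && wait "$pid"                           # terminate and reap
  }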
* Looking for test storage... 00:15:41.228 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:41.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.228 --rc genhtml_branch_coverage=1 00:15:41.228 --rc genhtml_function_coverage=1 00:15:41.228 --rc genhtml_legend=1 00:15:41.228 --rc geninfo_all_blocks=1 00:15:41.228 --rc geninfo_unexecuted_blocks=1 00:15:41.228 00:15:41.228 ' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:41.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.228 --rc genhtml_branch_coverage=1 00:15:41.228 --rc genhtml_function_coverage=1 00:15:41.228 --rc genhtml_legend=1 00:15:41.228 --rc geninfo_all_blocks=1 00:15:41.228 --rc geninfo_unexecuted_blocks=1 00:15:41.228 00:15:41.228 ' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:41.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.228 --rc genhtml_branch_coverage=1 00:15:41.228 --rc genhtml_function_coverage=1 00:15:41.228 --rc genhtml_legend=1 00:15:41.228 --rc geninfo_all_blocks=1 00:15:41.228 --rc geninfo_unexecuted_blocks=1 00:15:41.228 00:15:41.228 ' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:41.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.228 --rc genhtml_branch_coverage=1 00:15:41.228 --rc genhtml_function_coverage=1 00:15:41.228 --rc genhtml_legend=1 00:15:41.228 --rc geninfo_all_blocks=1 00:15:41.228 --rc geninfo_unexecuted_blocks=1 00:15:41.228 00:15:41.228 ' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.228 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.229 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.229 11:54:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:41.229 ************************************ 00:15:41.229 START TEST nvmf_shutdown_tc1 00:15:41.229 ************************************ 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:41.229 11:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:47.796 11:54:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:47.796 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:47.797 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:47.797 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:da:00.0: mlx_0_0' 00:15:47.797 Found net devices under 0000:da:00.0: mlx_0_0 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:47.797 Found net devices under 0000:da:00.1: mlx_0_1 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
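
The modprobe sequence traced just above (nvmf/common.sh@62-72) is the whole of load_ib_rdma_modules: it brings up the kernel RDMA stack, both connection managers plus the userspace verbs and MAD character devices, before any interface enumeration or rxe configuration is attempted. A minimal sketch of that helper, assuming only the uname guard and module names visible in the xtrace output:

# Sketch of load_ib_rdma_modules as traced at nvmf/common.sh@62-72.
load_ib_rdma_modules() {
    [ "$(uname)" != Linux ] && return 0   # RDMA module loading is Linux-only
    modprobe ib_cm        # InfiniBand connection manager
    modprobe ib_core      # core IB verbs midlayer
    modprobe ib_umad      # userspace MAD access
    modprobe ib_uverbs    # /dev/infiniband/uverbs* for libibverbs
    modprobe iw_cm        # iWARP connection manager
    modprobe rdma_cm      # RDMA-CM address/route resolution
    modprobe rdma_ucm     # userspace RDMA-CM for librdmacm
}
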
00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:47.797 11:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:47.797 11:54:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:47.797 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:47.797 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:47.797 altname enp218s0f0np0 00:15:47.797 altname ens818f0np0 00:15:47.797 inet 192.168.100.8/24 scope global mlx_0_0 00:15:47.797 valid_lft forever preferred_lft forever 00:15:47.797 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:47.798 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:47.798 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:47.798 altname enp218s0f1np1 00:15:47.798 altname ens818f1np1 00:15:47.798 inet 192.168.100.9/24 scope global mlx_0_1 00:15:47.798 valid_lft forever preferred_lft forever 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:47.798 
11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:47.798 192.168.100.9' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:47.798 192.168.100.9' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:15:47.798 11:54:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:47.798 192.168.100.9' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3233617 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3233617 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3233617 ']' 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.798 [2024-12-09 11:54:55.194708] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:15:47.798 [2024-12-09 11:54:55.194748] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.798 [2024-12-09 11:54:55.271250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.798 [2024-12-09 11:54:55.312929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.798 [2024-12-09 11:54:55.312966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.798 [2024-12-09 11:54:55.312972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.798 [2024-12-09 11:54:55.312978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.798 [2024-12-09 11:54:55.312983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.798 [2024-12-09 11:54:55.314601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.798 [2024-12-09 11:54:55.314712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.798 [2024-12-09 11:54:55.314831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.798 [2024-12-09 11:54:55.314831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:47.798 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.799 [2024-12-09 11:54:55.479209] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23abc40/0x23b0130) succeed. 00:15:47.799 [2024-12-09 11:54:55.490696] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23ad2d0/0x23f17d0) succeed. 
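
The address harvesting traced above reduces to one pipeline per interface: "ip -o" prints a single record per address, awk takes the ADDR/PREFIX field, and cut strips the prefix length; head and tail then peel the first and second addresses off the newline-separated list (nvmf/common.sh@116-117 and @485-486). Condensed, using the interface names and addresses from this run:

# Condensed from the get_ip_address / RDMA_IP_LIST handling traced above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# One IPv4 address per RDMA netdev (192.168.100.8 / .9 in this run),
# then split the two-line list into first/second target IPs.
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
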
00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.799 11:54:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:47.799 Malloc1 00:15:47.799 [2024-12-09 11:54:55.723922] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:47.799 Malloc2 00:15:47.799 Malloc3 00:15:47.799 Malloc4 00:15:48.056 Malloc5 00:15:48.056 Malloc6 00:15:48.056 Malloc7 00:15:48.056 Malloc8 00:15:48.056 Malloc9 00:15:48.056 Malloc10 00:15:48.313 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.313 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:48.313 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3233890 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3233890 /var/tmp/bdevperf.sock 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3233890 ']' 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:48.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
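
The bdev_svc instance being waited on here gets its bdev table through a process substitution rather than a config file: gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem in num_subsystems=({1..10}) (shutdown.sh@23), and the app sees the document as --json /dev/fd/63. Roughly, with the paths used in this workspace (the exact launch command later appears verbatim in the "Killed" message from shutdown.sh line 74):

# Sketch of the shutdown.sh@78 launch: a throwaway bdev_svc app (core mask
# 0x1, instance -i 1) attaches NVMe-oF controllers for cnode1..cnode10 from
# a JSON document generated on the fly and passed as /dev/fd/63.
rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # this job's checkout
num_subsystems=({1..10})
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")
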
00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.314 "method": "bdev_nvme_attach_controller" 00:15:48.314 } 00:15:48.314 EOF 00:15:48.314 )") 00:15:48.314 [2024-12-09 11:54:56.207593] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:15:48.314 [2024-12-09 11:54:56.207641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.314 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.314 { 00:15:48.314 "params": { 00:15:48.314 "name": "Nvme$subsystem", 00:15:48.314 "trtype": "$TEST_TRANSPORT", 00:15:48.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.314 "adrfam": "ipv4", 00:15:48.314 "trsvcid": "$NVMF_PORT", 00:15:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.314 "hdgst": ${hdgst:-false}, 00:15:48.314 "ddgst": ${ddgst:-false} 00:15:48.314 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 } 00:15:48.315 EOF 00:15:48.315 )") 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.315 { 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme$subsystem", 00:15:48.315 "trtype": "$TEST_TRANSPORT", 00:15:48.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "$NVMF_PORT", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.315 "hdgst": ${hdgst:-false}, 00:15:48.315 "ddgst": ${ddgst:-false} 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 } 00:15:48.315 EOF 00:15:48.315 )") 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.315 { 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme$subsystem", 00:15:48.315 "trtype": "$TEST_TRANSPORT", 00:15:48.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "$NVMF_PORT", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.315 "hdgst": ${hdgst:-false}, 00:15:48.315 "ddgst": ${ddgst:-false} 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 } 00:15:48.315 EOF 00:15:48.315 )") 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
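
Each pass through the loop above expands one heredoc with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted and appends the result to the config array; setting IFS=, then lets "${config[*]}" join the ten fragments with commas, and jq pretty-prints the assembled document (the outer wrapper that embeds the joined fragments is not visible in this excerpt). A stripped-down version of the pattern, assuming only what the trace at nvmf/common.sh@560-586 shows:

# Heredoc-per-subsystem config assembly, per fragment as traced above.
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # ten fragments, comma-joined into one string
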
00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:48.315 11:54:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme1", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme2", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme3", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme4", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme5", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme6", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme7", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme8", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme9", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 },{ 00:15:48.315 "params": { 00:15:48.315 "name": "Nvme10", 00:15:48.315 "trtype": "rdma", 00:15:48.315 "traddr": "192.168.100.8", 00:15:48.315 "adrfam": "ipv4", 00:15:48.315 "trsvcid": "4420", 00:15:48.315 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:48.315 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:48.315 "hdgst": false, 00:15:48.315 "ddgst": false 00:15:48.315 }, 00:15:48.315 "method": "bdev_nvme_attach_controller" 00:15:48.315 }' 00:15:48.315 [2024-12-09 11:54:56.284919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.315 [2024-12-09 11:54:56.327138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3233890 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:15:49.245 11:54:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:15:50.175 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3233890 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3233617 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:50.175 11:54:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.175 { 00:15:50.175 "params": { 00:15:50.175 "name": "Nvme$subsystem", 00:15:50.175 "trtype": "$TEST_TRANSPORT", 00:15:50.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.175 "adrfam": "ipv4", 00:15:50.175 "trsvcid": "$NVMF_PORT", 00:15:50.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.175 "hdgst": ${hdgst:-false}, 00:15:50.175 "ddgst": ${ddgst:-false} 00:15:50.175 }, 00:15:50.175 "method": "bdev_nvme_attach_controller" 00:15:50.175 } 00:15:50.175 EOF 00:15:50.175 )") 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.175 { 00:15:50.175 "params": { 00:15:50.175 "name": "Nvme$subsystem", 00:15:50.175 "trtype": "$TEST_TRANSPORT", 00:15:50.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.175 "adrfam": "ipv4", 00:15:50.175 "trsvcid": "$NVMF_PORT", 00:15:50.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.175 "hdgst": ${hdgst:-false}, 00:15:50.175 "ddgst": ${ddgst:-false} 00:15:50.175 }, 00:15:50.175 "method": "bdev_nvme_attach_controller" 00:15:50.175 } 00:15:50.175 EOF 00:15:50.175 )") 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.175 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.175 { 00:15:50.175 "params": { 00:15:50.175 "name": "Nvme$subsystem", 00:15:50.175 "trtype": "$TEST_TRANSPORT", 00:15:50.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.176 "adrfam": "ipv4", 00:15:50.176 "trsvcid": "$NVMF_PORT", 00:15:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.176 "hdgst": ${hdgst:-false}, 00:15:50.176 "ddgst": ${ddgst:-false} 00:15:50.176 }, 00:15:50.176 "method": "bdev_nvme_attach_controller" 00:15:50.176 } 00:15:50.176 EOF 00:15:50.176 )") 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.176 { 00:15:50.176 "params": { 00:15:50.176 "name": "Nvme$subsystem", 00:15:50.176 "trtype": "$TEST_TRANSPORT", 00:15:50.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.176 "adrfam": "ipv4", 00:15:50.176 "trsvcid": "$NVMF_PORT", 00:15:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.176 "hdgst": ${hdgst:-false}, 00:15:50.176 "ddgst": ${ddgst:-false} 00:15:50.176 }, 00:15:50.176 "method": 
"bdev_nvme_attach_controller" 00:15:50.176 } 00:15:50.176 EOF 00:15:50.176 )") 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.176 { 00:15:50.176 "params": { 00:15:50.176 "name": "Nvme$subsystem", 00:15:50.176 "trtype": "$TEST_TRANSPORT", 00:15:50.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.176 "adrfam": "ipv4", 00:15:50.176 "trsvcid": "$NVMF_PORT", 00:15:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.176 "hdgst": ${hdgst:-false}, 00:15:50.176 "ddgst": ${ddgst:-false} 00:15:50.176 }, 00:15:50.176 "method": "bdev_nvme_attach_controller" 00:15:50.176 } 00:15:50.176 EOF 00:15:50.176 )") 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.176 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.176 { 00:15:50.176 "params": { 00:15:50.176 "name": "Nvme$subsystem", 00:15:50.176 "trtype": "$TEST_TRANSPORT", 00:15:50.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.176 "adrfam": "ipv4", 00:15:50.176 "trsvcid": "$NVMF_PORT", 00:15:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.176 "hdgst": ${hdgst:-false}, 00:15:50.176 "ddgst": ${ddgst:-false} 00:15:50.176 }, 00:15:50.176 "method": "bdev_nvme_attach_controller" 00:15:50.176 } 00:15:50.176 EOF 00:15:50.176 )") 00:15:50.433 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.433 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.433 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.433 { 00:15:50.433 "params": { 00:15:50.433 "name": "Nvme$subsystem", 00:15:50.433 "trtype": "$TEST_TRANSPORT", 00:15:50.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.433 "adrfam": "ipv4", 00:15:50.433 "trsvcid": "$NVMF_PORT", 00:15:50.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.433 "hdgst": ${hdgst:-false}, 00:15:50.433 "ddgst": ${ddgst:-false} 00:15:50.433 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 } 00:15:50.434 EOF 00:15:50.434 )") 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.434 [2024-12-09 11:54:58.236056] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
00:15:50.434 [2024-12-09 11:54:58.236106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234160 ] 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.434 { 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme$subsystem", 00:15:50.434 "trtype": "$TEST_TRANSPORT", 00:15:50.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "$NVMF_PORT", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.434 "hdgst": ${hdgst:-false}, 00:15:50.434 "ddgst": ${ddgst:-false} 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 } 00:15:50.434 EOF 00:15:50.434 )") 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.434 { 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme$subsystem", 00:15:50.434 "trtype": "$TEST_TRANSPORT", 00:15:50.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "$NVMF_PORT", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.434 "hdgst": ${hdgst:-false}, 00:15:50.434 "ddgst": ${ddgst:-false} 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 } 00:15:50.434 EOF 00:15:50.434 )") 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.434 { 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme$subsystem", 00:15:50.434 "trtype": "$TEST_TRANSPORT", 00:15:50.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "$NVMF_PORT", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.434 "hdgst": ${hdgst:-false}, 00:15:50.434 "ddgst": ${ddgst:-false} 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 } 00:15:50.434 EOF 00:15:50.434 )") 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
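
Once the throwaway bdev_svc is killed, this second generation pass feeds the same ten-controller JSON to the real bdevperf binary (shutdown.sh@92): queue depth 64, 64 KiB I/Os, a verify workload that reads back and compares what it wrote, for one second. Note the per-device MiB/s in the results table that follows is just IOPS times 64 KiB, e.g. 3628.07 x 64 KiB = 226.75 MiB/s for the total row. An equivalent standalone launch, assuming this workspace's build path:

# Equivalent to the shutdown.sh@92 invocation traced above:
#   -q 64      keep 64 I/Os outstanding per bdev
#   -o 65536   64 KiB I/O size
#   -w verify  write, then read back and compare
#   -t 1       run for one second
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1
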
00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:50.434 11:54:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme1", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme2", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme3", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme4", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme5", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme6", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme7", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme8", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme9", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 },{ 00:15:50.434 "params": { 00:15:50.434 "name": "Nvme10", 00:15:50.434 "trtype": "rdma", 00:15:50.434 "traddr": "192.168.100.8", 00:15:50.434 "adrfam": "ipv4", 00:15:50.434 "trsvcid": "4420", 00:15:50.434 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:50.434 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:50.434 "hdgst": false, 00:15:50.434 "ddgst": false 00:15:50.434 }, 00:15:50.434 "method": "bdev_nvme_attach_controller" 00:15:50.434 }' 00:15:50.434 [2024-12-09 11:54:58.314889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.434 [2024-12-09 11:54:58.355893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.365 Running I/O for 1 seconds... 00:15:52.735 3194.00 IOPS, 199.62 MiB/s 00:15:52.735 Latency(us) 00:15:52.735 [2024-12-09T10:55:00.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.735 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.735 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme1n1 : 1.18 343.09 21.44 0.00 0.00 179771.95 9237.46 248662.31 00:15:52.736 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme2n1 : 1.18 352.90 22.06 0.00 0.00 171353.01 10298.51 172765.38 00:15:52.736 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme3n1 : 1.18 360.99 22.56 0.00 0.00 165185.77 6459.98 160781.65 00:15:52.736 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme4n1 : 1.18 352.13 22.01 0.00 0.00 166943.21 27837.20 157785.72 00:15:52.736 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme5n1 : 1.18 338.32 21.14 0.00 0.00 168893.76 34952.53 146800.64 00:15:52.736 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme6n1 : 1.19 377.01 23.56 0.00 0.00 155498.75 4088.20 139810.13 00:15:52.736 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme7n1 : 1.19 376.61 23.54 0.00 0.00 153415.92 4337.86 132819.63 00:15:52.736 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme8n1 : 1.19 376.21 23.51 0.00 0.00 151374.16 4618.73 125329.80 00:15:52.736 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme9n1 : 1.19 375.71 23.48 0.00 0.00 149510.97 5180.46 114344.72 00:15:52.736 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:52.736 Verification LBA range: start 0x0 length 0x400 00:15:52.736 Nvme10n1 : 1.19 375.11 23.44 0.00 0.00 147582.08 6210.32 104857.60 00:15:52.736 [2024-12-09T10:55:00.789Z] =================================================================================================================== 00:15:52.736 [2024-12-09T10:55:00.789Z] Total : 3628.07 226.75 0.00 0.00 160528.32 4088.20 248662.31 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:52.736 rmmod nvme_rdma 00:15:52.736 rmmod nvme_fabrics 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3233617 ']' 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3233617 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3233617 ']' 00:15:52.736 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3233617 00:15:52.993 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:15:52.993 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.993 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3233617 00:15:52.993 11:55:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:52.994 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:52.994 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3233617' 00:15:52.994 killing process with pid 3233617 00:15:52.994 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3233617 00:15:52.994 11:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3233617 00:15:53.252 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:53.252 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:53.252 00:15:53.252 real 0m12.093s 00:15:53.252 user 0m28.332s 00:15:53.252 sys 0m5.444s 00:15:53.252 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.252 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:53.252 ************************************ 00:15:53.252 END TEST nvmf_shutdown_tc1 00:15:53.252 ************************************ 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:53.510 ************************************ 00:15:53.510 START TEST nvmf_shutdown_tc2 00:15:53.510 ************************************ 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:53.510 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.511 11:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:53.511 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:53.511 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:53.511 Found net devices under 0000:da:00.0: mlx_0_0 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:53.511 Found net devices under 0000:da:00.1: mlx_0_1 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:53.511 
11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:53.511 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:53.512 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:53.512 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:53.512 altname enp218s0f0np0 00:15:53.512 altname ens818f0np0 00:15:53.512 inet 192.168.100.8/24 scope global mlx_0_0 00:15:53.512 valid_lft forever preferred_lft forever 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:53.512 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:53.512 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:53.512 altname enp218s0f1np1 00:15:53.512 altname ens818f1np1 00:15:53.512 inet 192.168.100.9/24 scope global mlx_0_1 00:15:53.512 valid_lft forever preferred_lft forever 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:15:53.512 11:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:53.512 11:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.512 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:53.512 192.168.100.9' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:53.770 192.168.100.9' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:53.770 192.168.100.9' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3234863 00:15:53.770 11:55:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3234863 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3234863 ']' 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.770 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:53.770 [2024-12-09 11:55:01.653269] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:15:53.770 [2024-12-09 11:55:01.653310] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.770 [2024-12-09 11:55:01.730172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.770 [2024-12-09 11:55:01.772672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.770 [2024-12-09 11:55:01.772707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.770 [2024-12-09 11:55:01.772714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.770 [2024-12-09 11:55:01.772720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.770 [2024-12-09 11:55:01.772725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
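Everything from nvmftestinit up to this point reduces to address discovery: for each RDMA netdev the helpers scrape `ip -o -4 addr show`, where awk field 4 is the CIDR address and cut strips the prefix length, then split the resulting two-line list with head/tail into the first and second target IPs. A condensed sketch of that pipeline, assuming the interface names found on this host:

# Condensed sketch of the address scraping traced above; mlx_0_0 and
# mlx_0_1 are the netdev names discovered on this particular machine.
get_ip_address() {
  local interface=$1
  # 'ip -o' emits one record per line; field 4 looks like 192.168.100.8/24.
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

With the addresses known, nvmfappstart launches nvmf_tgt, and the waitforlisten step traced here simply blocks until that pid answers on /var/tmp/spdk.sock.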
00:15:53.770 [2024-12-09 11:55:01.774345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.770 [2024-12-09 11:55:01.774455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.770 [2024-12-09 11:55:01.774561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.770 [2024-12-09 11:55:01.774562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:54.027 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.027 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:54.027 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:54.027 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:54.027 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.028 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.028 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:54.028 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.028 11:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.028 [2024-12-09 11:55:01.934178] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb81c40/0xb86130) succeed. 00:15:54.028 [2024-12-09 11:55:01.945662] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb832d0/0xbc77d0) succeed. 
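The four reactor notices above line up with the target's core mask: nvmf_tgt was started with -m 0x1E, i.e. bits 1 through 4 set, so cores 1-4 each run a reactor. The RDMA transport is then created over the default RPC socket. Since rpc_cmd effectively forwards its arguments to scripts/rpc.py, both facts can be checked by hand (a sketch; the relative rpc.py path assumes the SPDK repo root as working directory):

# Bit k of the core mask enables core k: cores 1-4 give 0x1E.
printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))   # -> 0x1E

# Manual equivalent of the rpc_cmd above (same flags, default socket path):
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
  --num-shared-buffers 1024 -u 8192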
00:15:54.028 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.028 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:54.028 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:54.028 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.028 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.285 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.285 Malloc1 00:15:54.285 [2024-12-09 11:55:02.168934] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:54.285 Malloc2 00:15:54.285 Malloc3 00:15:54.285 Malloc4 00:15:54.285 Malloc5 00:15:54.542 Malloc6 00:15:54.542 Malloc7 00:15:54.542 Malloc8 00:15:54.542 Malloc9 00:15:54.542 Malloc10 00:15:54.542 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.542 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:54.542 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:54.542 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.800 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3235003 00:15:54.800 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3235003 /var/tmp/bdevperf.sock 00:15:54.800 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3235003 ']' 00:15:54.800 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
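Two details of the bdevperf launch above are worth unpacking. First, --json /dev/fd/63 is how bash renders a process substitution in the traced argv: the JSON produced by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 (whose generation is traced next) reaches bdevperf without touching disk. Second, the workload flags match the job table printed later: -q 64 is the queue depth, -o 65536 the I/O size, -w verify the workload, -t 10 the run time in seconds. A reconstructed shape of the command, under the assumption that the substitution was written inline in shutdown.sh:

# Sketch of the traced launch; <(...) shows up as /dev/fd/63 in the argv.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10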
00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 
[2024-12-09 11:55:02.643024] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:15:54.801 [2024-12-09 11:55:02.643073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235003 ] 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.801 "hdgst": ${hdgst:-false}, 00:15:54.801 "ddgst": ${ddgst:-false} 00:15:54.801 }, 00:15:54.801 "method": "bdev_nvme_attach_controller" 00:15:54.801 } 00:15:54.801 EOF 00:15:54.801 )") 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:54.801 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:54.801 { 00:15:54.801 "params": { 00:15:54.801 "name": "Nvme$subsystem", 00:15:54.801 "trtype": "$TEST_TRANSPORT", 00:15:54.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.801 "adrfam": "ipv4", 00:15:54.801 "trsvcid": "$NVMF_PORT", 00:15:54.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.802 "hdgst": ${hdgst:-false}, 00:15:54.802 "ddgst": ${ddgst:-false} 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 } 00:15:54.802 EOF 00:15:54.802 )") 00:15:54.802 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:54.802 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
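The joined configuration for all ten controllers is echoed next. Once bdevperf is running, the test does not rely on a fixed delay before shutting anything down: shutdown.sh's waitforio helper, traced a little further below, polls Nvme1n1's read counter over the bdevperf RPC socket until at least 100 reads have completed, retrying up to ten times with a 0.25 s sleep (in this run it passes on the second poll, 4 then 152 ops). A hedged reconstruction of that loop:

# Reconstruction of target/shutdown.sh waitforio as traced below; rpc_cmd is
# replaced here by a direct scripts/rpc.py call (an assumed equivalence).
waitforio() {
  local rpc_sock=$1 bdev=$2
  local ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

# As exercised in this run: waitforio /var/tmp/bdevperf.sock Nvme1n1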
00:15:54.802 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:15:54.802 11:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme1", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme2", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme3", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme4", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme5", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme6", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme7", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme8", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme9", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 },{ 00:15:54.802 "params": { 00:15:54.802 "name": "Nvme10", 00:15:54.802 "trtype": "rdma", 00:15:54.802 "traddr": "192.168.100.8", 00:15:54.802 "adrfam": "ipv4", 00:15:54.802 "trsvcid": "4420", 00:15:54.802 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:54.802 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:54.802 "hdgst": false, 00:15:54.802 "ddgst": false 00:15:54.802 }, 00:15:54.802 "method": "bdev_nvme_attach_controller" 00:15:54.802 }' 00:15:54.802 [2024-12-09 11:55:02.721658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.802 [2024-12-09 11:55:02.762582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.731 Running I/O for 10 seconds... 00:15:55.731 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.731 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:55.731 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:55.731 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.731 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.988 11:55:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=4 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 4 -ge 100 ']' 00:15:55.988 11:55:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.245 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=152 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 152 -ge 100 ']' 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3235003 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3235003 ']' 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3235003 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3235003 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3235003' 00:15:56.503 killing process with pid 3235003 00:15:56.503 11:55:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3235003
00:15:56.503 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3235003
00:15:56.503 Received shutdown signal, test time was about 0.826131 seconds
00:15:56.503
00:15:56.503 Latency(us)
[2024-12-09T10:55:04.556Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:15:56.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme1n1  :  0.81   341.15   21.32   0.00   0.00   183339.41    5710.99   205720.62
00:15:56.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme2n1  :  0.81   334.60   20.91   0.00   0.00   182931.72    8238.81   190740.97
00:15:56.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme3n1  :  0.81   352.52   22.03   0.00   0.00   170543.33    8363.64   183750.46
00:15:56.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme4n1  :  0.82   392.42   24.53   0.00   0.00   150122.45    5180.46   131820.98
00:15:56.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme5n1  :  0.82   372.16   23.26   0.00   0.00   155149.29    8925.38   161780.30
00:15:56.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme6n1  :  0.82   390.99   24.44   0.00   0.00   144846.07    9674.36   121335.22
00:15:56.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme7n1  :  0.82   390.38   24.40   0.00   0.00   141479.25   10111.27   117340.65
00:15:56.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme8n1  :  0.82   389.49   24.34   0.00   0.00   139649.41   10985.08   109351.50
00:15:56.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme9n1  :  0.82   388.61   24.29   0.00   0.00   136982.33   12108.56    94371.84
00:15:56.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:56.503 Verification LBA range: start 0x0 length 0x400
00:15:56.503 Nvme10n1 :  0.83   310.18   19.39   0.00   0.00   167583.94    9362.29   208716.56
[2024-12-09T10:55:04.556Z] ===================================================================================================================
[2024-12-09T10:55:04.556Z] Total    :  3662.49  228.91   0.00   0.00   156121.73    5180.46   208716.56
00:15:56.761 11:55:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:15:57.690 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3234863
00:15:57.690 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:15:57.947 11:55:05
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:15:57.947 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:57.948 rmmod nvme_rdma 00:15:57.948 rmmod nvme_fabrics 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3234863 ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3234863 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3234863 ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3234863 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3234863 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3234863' 00:15:57.948 killing process with pid 3234863 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3234863 00:15:57.948 11:55:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3234863 00:15:58.515 11:55:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:58.515 00:15:58.515 real 0m4.922s 00:15:58.515 user 0m19.967s 00:15:58.515 sys 0m1.012s 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:58.515 ************************************ 00:15:58.515 END TEST nvmf_shutdown_tc2 00:15:58.515 ************************************ 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:58.515 ************************************ 00:15:58.515 START TEST nvmf_shutdown_tc3 00:15:58.515 ************************************ 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.515 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.516 11:55:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.516 
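The `e810`, `x722`, and `mlx` arrays populated above are PCI device-ID lookup tables under the Intel (0x8086) and Mellanox (0x15b3) vendor IDs; nvmftestinit then walks the host's devices and keeps whatever matches (in this run both ConnectX ports report 0x15b3 - 0x1015). A rough sketch of that classification, with an lspci scan standing in for the script's own pci_bus_cache lookup (the lspci approach is an assumption for this sketch):

#!/usr/bin/env bash
# Illustrative NIC classification using the Mellanox device IDs listed in
# the trace above; nvmf/common.sh reads a prebuilt pci_bus_cache instead
# of calling lspci.
mellanox=15b3
mlx_ids=(1013 1015 1017 1019 101b 101d 1021 a2d6 a2dc)

# lspci -Dnmm prints: slot "class" "vendor" "device" ...
lspci -Dnmm | while read -r slot _ vendor device _; do
  vendor=${vendor//\"/} device=${device//\"/}
  for id in "${mlx_ids[@]}"; do
    if [[ $vendor == "$mellanox" && $device == "$id" ]]; then
      echo "Found $slot (0x$vendor - 0x$device)"  # same shape as the log line
    fi
  done
done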
11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:15:58.516 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:15:58.516 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:15:58.516 Found net devices under 0000:da:00.0: mlx_0_0 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:15:58.516 Found net devices under 0000:da:00.1: mlx_0_1 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:58.516 11:55:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.516 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:58.517 11:55:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:58.517 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.517 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:15:58.517 altname enp218s0f0np0 00:15:58.517 altname ens818f0np0 00:15:58.517 inet 192.168.100.8/24 scope global mlx_0_0 00:15:58.517 valid_lft forever preferred_lft forever 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:58.517 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.517 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:15:58.517 altname enp218s0f1np1 00:15:58.517 altname ens818f1np1 00:15:58.517 inet 192.168.100.9/24 scope global mlx_0_1 00:15:58.517 valid_lft forever preferred_lft forever 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:58.517 192.168.100.9' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:58.517 192.168.100.9' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:58.517 192.168.100.9' 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:15:58.517 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3235812 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3235812 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3235812 ']' 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.776 11:55:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.776 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:58.776 [2024-12-09 11:55:06.647311] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:15:58.776 [2024-12-09 11:55:06.647352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.776 [2024-12-09 11:55:06.723690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.776 [2024-12-09 11:55:06.765246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.776 [2024-12-09 11:55:06.765283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.776 [2024-12-09 11:55:06.765291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.776 [2024-12-09 11:55:06.765297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.776 [2024-12-09 11:55:06.765302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.776 [2024-12-09 11:55:06.766767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.776 [2024-12-09 11:55:06.766881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.776 [2024-12-09 11:55:06.766988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.777 [2024-12-09 11:55:06.766989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.035 11:55:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:59.035 [2024-12-09 11:55:06.926491] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x1eccc40/0x1ed1130) succeed. 00:15:59.035 [2024-12-09 11:55:06.937848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ece2d0/0x1f127d0) succeed. 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.035 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:59.292 11:55:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.292 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:59.292 Malloc1 00:15:59.292 [2024-12-09 11:55:07.160838] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.292 Malloc2 00:15:59.292 Malloc3 00:15:59.292 Malloc4 00:15:59.292 Malloc5 00:15:59.549 Malloc6 00:15:59.549 Malloc7 00:15:59.549 Malloc8 00:15:59.549 Malloc9 00:15:59.549 Malloc10 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3236084 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3236084 /var/tmp/bdevperf.sock 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3236084 ']' 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:59.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
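The bdevperf launch on target/shutdown.sh@125 feeds the JSON produced by `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` through bash process substitution, which is why the EAL banner below shows `--json /dev/fd/63`:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)

Once the socket is up, the harness runs the waitforio poll seen in the tc2 trace above (read_io_count climbing from 4 to 152 before the `-ge 100` check passes): query Nvme1n1's read-op counter over the bdevperf RPC socket, up to 10 tries, 0.25 s apart. A self-contained sketch, assuming `rpc_cmd` resolves to scripts/rpc.py as in SPDK's test harness:

#!/usr/bin/env bash
# Sketch of the waitforio readiness check from target/shutdown.sh.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ret=1
for ((i = 10; i != 0; i--)); do
  read_io_count=$("$rpc_py" -s "$sock" bdev_get_iostat -b Nvme1n1 |
    jq -r '.bdevs[0].num_read_ops')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0  # enough traffic observed; the shutdown test can proceed
    break
  fi
  sleep 0.25
done
exit $ret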
00:15:59.549 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:15:59.550 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:59.550 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:15:59.550 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:15:59.806 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:15:59.806 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:15:59.806 {
00:15:59.806 "params": {
00:15:59.806 "name": "Nvme$subsystem",
00:15:59.806 "trtype": "$TEST_TRANSPORT",
00:15:59.806 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:59.806 "adrfam": "ipv4",
00:15:59.806 "trsvcid": "$NVMF_PORT",
00:15:59.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:59.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:59.806 "hdgst": ${hdgst:-false},
00:15:59.806 "ddgst": ${ddgst:-false}
00:15:59.806 },
00:15:59.806 "method": "bdev_nvme_attach_controller"
00:15:59.806 }
00:15:59.806 EOF
00:15:59.806 )")
00:15:59.806 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
(the for-subsystem / config+= / cat trace above repeats verbatim for each of the ten subsystems)
00:15:59.807 [2024-12-09 11:55:07.644654] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:15:59.807 [2024-12-09 11:55:07.644702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236084 ]
00:15:59.807 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
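The block above is the test assembling bdevperf's JSON configuration: one bdev_nvme_attach_controller fragment per subsystem is appended to the config array through a here-doc, the fragments are joined with IFS=, and printf, and the result is validated and pretty-printed by jq; the two bracketed lines in between are bdevperf's DPDK EAL bring-up (one core, mask 0x1). A minimal sketch of that pattern, reconstructed from this xtrace - the function name and the outer subsystems/bdev wrapper object are assumptions, and TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment (rdma, 192.168.100.8, 4420 in this run):

# Sketch only: reconstructed from the xtrace above, not copied from nvmf/common.sh.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per subsystem; $subsystem and the
        # ${hdgst:-false}/${ddgst:-false} defaults expand inside the here-doc.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and let jq check the result; the trace
    # shows only the jq/IFS/printf steps, so the wrapper object is assumed.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=','; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

gen_target_json {1..10} > /tmp/bdevperf.json    # cnode1..cnode10, as in this run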
00:15:59.807 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:15:59.807 11:55:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme1", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme2", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme3", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme4", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme5", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.807 "params": { 00:15:59.807 "name": "Nvme6", 00:15:59.807 "trtype": "rdma", 00:15:59.807 "traddr": "192.168.100.8", 00:15:59.807 "adrfam": "ipv4", 00:15:59.807 "trsvcid": "4420", 00:15:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:59.807 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:59.807 "hdgst": false, 00:15:59.807 "ddgst": false 00:15:59.807 }, 00:15:59.807 "method": "bdev_nvme_attach_controller" 00:15:59.807 },{ 00:15:59.808 "params": { 00:15:59.808 "name": "Nvme7", 00:15:59.808 "trtype": "rdma", 00:15:59.808 "traddr": "192.168.100.8", 00:15:59.808 "adrfam": "ipv4", 00:15:59.808 "trsvcid": "4420", 00:15:59.808 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:59.808 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:59.808 "hdgst": false, 00:15:59.808 "ddgst": false 00:15:59.808 }, 00:15:59.808 "method": "bdev_nvme_attach_controller" 00:15:59.808 },{ 00:15:59.808 "params": { 00:15:59.808 "name": "Nvme8", 00:15:59.808 "trtype": "rdma", 00:15:59.808 "traddr": "192.168.100.8", 00:15:59.808 "adrfam": "ipv4", 00:15:59.808 "trsvcid": "4420", 00:15:59.808 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:59.808 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:59.808 "hdgst": false, 00:15:59.808 "ddgst": false 00:15:59.808 }, 00:15:59.808 "method": "bdev_nvme_attach_controller" 00:15:59.808 },{ 00:15:59.808 "params": { 00:15:59.808 "name": "Nvme9", 00:15:59.808 "trtype": "rdma", 00:15:59.808 "traddr": "192.168.100.8", 00:15:59.808 "adrfam": "ipv4", 00:15:59.808 "trsvcid": "4420", 00:15:59.808 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:59.808 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:59.808 "hdgst": false, 00:15:59.808 "ddgst": false 00:15:59.808 }, 00:15:59.808 "method": "bdev_nvme_attach_controller" 00:15:59.808 },{ 00:15:59.808 "params": { 00:15:59.808 "name": "Nvme10", 00:15:59.808 "trtype": "rdma", 00:15:59.808 "traddr": "192.168.100.8", 00:15:59.808 "adrfam": "ipv4", 00:15:59.808 "trsvcid": "4420", 00:15:59.808 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:59.808 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:59.808 "hdgst": false, 00:15:59.808 "ddgst": false 00:15:59.808 }, 00:15:59.808 "method": "bdev_nvme_attach_controller" 00:15:59.808 }' 00:15:59.808 [2024-12-09 11:55:07.724372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.808 [2024-12-09 11:55:07.765254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.739 Running I/O for 10 seconds... 00:16:00.739 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.739 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:16:00.739 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:00.739 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.739 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:16:00.997 11:55:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.254 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=154 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 154 -ge 100 ']' 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3235812 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3235812 ']' 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3235812 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3235812 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:01.512 11:55:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3235812' 00:16:01.512 killing process with pid 3235812 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3235812 00:16:01.512 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3235812 00:16:02.028 2470.00 IOPS, 154.38 MiB/s [2024-12-09T10:55:10.081Z] 11:55:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:16:02.599 [2024-12-09 11:55:10.424028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.424062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:cff200 sqhd:f800 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.424073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.424079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:cff200 sqhd:f800 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.424087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.424093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:cff200 sqhd:f800 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.424099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.424105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:cff200 sqhd:f800 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.426481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.599 [2024-12-09 11:55:10.426523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
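Rewinding a step: before the teardown notices above began, the trace shows the two helpers driving tc3. waitforio polls bdev_get_iostat over the bdevperf RPC socket up to ten times, 0.25 s apart, until Nvme1n1 reports at least 100 completed reads (3 on the first poll, 154 on the second), and killprocess then checks that pid 3235812 is alive and that its comm name (reactor_1 here) is not a sudo wrapper before kill + wait. A sketch of both, reconstructed from the shutdown.sh and autotest_common.sh xtrace; rpc_cmd is the framework's wrapper around scripts/rpc.py, and the sudo-unwrap branch is elided:

# Sketches reconstructed from the xtrace above.
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then   # >=100 reads: I/O really flowed
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # must still be running
    local process_name=
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        :   # the real helper unwraps sudo's child pid here (elided)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}

# tc3 usage, matching this run:
#   waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess 3235812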
00:16:02.599 [2024-12-09 11:55:10.426576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.426593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.426610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.426627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.426644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.426659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.426676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.426692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.428614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.599 [2024-12-09 11:55:10.428650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:16:02.599 [2024-12-09 11:55:10.428696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.428721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.428756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.428772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.428789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.428823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.428840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.428857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.431263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.599 [2024-12-09 11:55:10.431298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:16:02.599 [2024-12-09 11:55:10.431341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.431365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.431390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.431411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.431434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.431457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.431480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.431501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.434215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.599 [2024-12-09 11:55:10.434249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:02.599 [2024-12-09 11:55:10.434287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.434311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.434334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.434378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.434399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.434422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.434442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.436829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.599 [2024-12-09 11:55:10.436864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
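Every completion printed during this teardown carries the same status, ABORTED - SQ DELETION (00/08): the parenthesized pair is NVMe status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), which is what each admin queue's outstanding ASYNC EVENT REQUESTs collapse to once the controller underneath them is gone. A throwaway decoder covering only the pair seen in this log:

# Decode the "(sct/sc)" suffix that spdk_nvme_print_completion appends;
# only the value appearing in this log is mapped, anything else falls through.
decode_status() {
    case "$1/$2" in
        00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
        *)     echo "unmapped: sct=$1 sc=$2" ;;
    esac
}
decode_status 00 08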
00:16:02.599 [2024-12-09 11:55:10.436909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.436934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.436957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.599 [2024-12-09 11:55:10.436978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.599 [2024-12-09 11:55:10.437001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.437022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.437045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.437066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.439597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.600 [2024-12-09 11:55:10.439631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:16:02.600 [2024-12-09 11:55:10.439673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.439696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.439720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.439743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.439765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.439787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.439821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.439843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.442075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.600 [2024-12-09 11:55:10.442108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:16:02.600 [2024-12-09 11:55:10.442145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.442168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.442192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.442213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.442236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.442257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.442287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.442309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.444696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.600 [2024-12-09 11:55:10.444729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:02.600 [2024-12-09 11:55:10.444769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.444794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.444829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.444874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.444896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.444918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.447311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.600 [2024-12-09 11:55:10.447348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:16:02.600 [2024-12-09 11:55:10.447393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.447417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.447440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.447462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.447485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.447506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.447530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.600 [2024-12-09 11:55:10.447550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32547 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.449701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:02.600 [2024-12-09 11:55:10.449734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:02.600 [2024-12-09 11:55:10.451992] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.454599] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.457161] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.459642] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.462017] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.464365] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.466579] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:16:02.600 [2024-12-09 11:55:10.468861] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
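At this point all ten controllers (cnode1 through cnode10) have taken a CQ transport error -6 - the RDMA completion queue died with the target process - been marked failed, and are having failover attempts refused because one is already in progress; the WRITE (and later READ) dumps that follow are every in-flight command the bdevs still had queued, all completing as aborted. For post-mortem triage of a saved run like this, a few greps are usually enough (build.log is a placeholder path):

# Hypothetical triage over a saved copy of this output.
log=build.log
# one CQ transport error per controller is expected here:
grep -o 'cnode[0-9]*, 1] CQ transport error -6' "$log" | sort -V | uniq -c
# controllers that reached the failed state:
grep -c 'in failed state' "$log"
# in-flight commands aborted by queue deletion:
grep -c 'ABORTED - SQ DELETION' "$log"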
00:16:02.600 [2024-12-09 11:55:10.469049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 
11:55:10.469600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.600 [2024-12-09 11:55:10.469771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184d00 00:16:02.600 [2024-12-09 11:55:10.469794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.469880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184d00 00:16:02.601 [2024-12-09 11:55:10.469906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.469940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002df0000 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.469963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.469998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfe80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafe00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fd80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fd00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fc80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6fc00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5fb80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4fb00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3fa80 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2fa00 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f980 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f900 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff880 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef800 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.470975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf780 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.470998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf700 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf680 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf600 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f580 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8f500 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7f480 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6f400 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5f380 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4f300 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3f280 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2f200 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1f180 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0f100 len:0x10000 key:0x183f00 00:16:02.601 [2024-12-09 11:55:10.471749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0 00:16:02.601 [2024-12-09 11:55:10.471783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x201002ff0000 len:0x10000 key:0x184500
00:16:02.601 [2024-12-09 11:55:10.471806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0
00:16:02.601 [2024-12-09 11:55:10.471850 .. 11:55:10.472857] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:38-55 nsid:1 lba:38656-40832 len:128 SGL KEYED DATA BLOCK len:0x10000 key:0x184500 (cid:55 key:0x184d00), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0
00:16:02.602 [2024-12-09 11:55:10.477157] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:16:02.602 [2024-12-09 11:55:10.477210 .. 11:55:10.480865] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:62-64,0-60 nsid:1 lba:8192-16256 len:128 SGL KEYED DATA BLOCK len:0x10000 key:0x184700, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53e9b000 sqhd:7210 p:0 m:0 dnr:0
00:16:02.604 [2024-12-09 11:55:10.508956] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509067] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509080] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509090] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509101] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509110] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509119] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509132] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509141] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509151] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:16:02.604 [2024-12-09 11:55:10.509160] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
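The dump above is SPDK's standard post-mortem for a deleted submission queue: nvme_io_qpair_print_command (nvme_qpair.c:243) echoes each outstanding I/O, and spdk_nvme_print_completion (nvme_qpair.c:474) prints the status it was completed with. The "(00/08)" pair is (Status Code Type/Status Code) from dword 3 of the NVMe completion entry: SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, which is what every queued READ and WRITE gets when the qpair is torn down for the controller resets that follow; dnr:0 (Do Not Retry clear) means the initiator may requeue them. A minimal, self-contained C sketch of that decoding, using a hand-built sample dword rather than SPDK's own print routine:

#include <stdint.h>
#include <stdio.h>

/* Decode the "(SCT/SC)" pair SPDK logs, e.g. "ABORTED - SQ DELETION (00/08)".
 * Per the NVMe spec, completion-queue-entry dword 3 carries the status field:
 * bits 24:17 = Status Code, bits 27:25 = Status Code Type, bit 31 = DNR. */
static const char *nvme_generic_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION"; /* SQ deleted while the command was queued */
    return "OTHER";
}

int main(void)
{
    /* Hypothetical sample CQE dword 3: SCT=0x0 (generic), SC=0x08, DNR=0. */
    uint32_t cqe_dw3 = (0x0u << 25) | (0x08u << 17);

    uint8_t sct = (cqe_dw3 >> 25) & 0x07;
    uint8_t sc  = (cqe_dw3 >> 17) & 0xff;
    uint8_t dnr = (cqe_dw3 >> 31) & 0x01;

    printf("%s (%02x/%02x) dnr:%u\n", nvme_generic_status_str(sct, sc), sct, sc, dnr);
    return 0;
}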
00:16:02.604 [2024-12-09 11:55:10.515594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.515622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.516156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.516173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.516183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.516195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:16:02.604 task offset: 35712 on job bdev=Nvme1n1 fails
00:16:02.604 Latency(us)
00:16:02.604 [2024-12-09T10:55:10.657Z] (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in error after the runtime shown)
00:16:02.604 [2024-12-09T10:55:10.657Z] Device Information : runtime(s)    IOPS  MiB/s  Fail/s  TO/s    Average        min         max
00:16:02.604 Nvme1n1            :       1.87  136.99   8.56   34.25  0.00  369951.89   35202.19  1062557.01
00:16:02.604 Nvme2n1            :       1.87  136.93   8.56   34.23  0.00  366535.83   41693.38  1054567.86
00:16:02.604 Nvme3n1            :       1.87  136.87   8.55   34.22  0.00  363459.15   45937.62  1054567.86
00:16:02.604 Nvme4n1            :       1.87  152.85   9.55   34.20  0.00  329520.71    4743.56  1054567.86
00:16:02.604 Nvme5n1            :       1.87  139.96   8.75   34.19  0.00  350884.49    9112.62  1054567.86
00:16:02.604 Nvme6n1            :       1.87  145.24   9.08   34.17  0.00  337495.91   12732.71  1054567.86
00:16:02.604 Nvme7n1            :       1.87  153.72   9.61   34.16  0.00  319299.34   16976.94  1046578.71
00:16:02.604 Nvme8n1            :       1.87  146.19   9.14   34.15  0.00  329619.67   23967.45  1046578.71
00:16:02.604 Nvme9n1            :       1.83  139.71   8.73   34.93  0.00  337932.09   58670.32  1070546.16
00:16:02.604 Nvme10n1           :       1.84   34.78   2.17   34.78  0.00  842324.85   62415.24  1070546.16
00:16:02.604 [2024-12-09T10:55:10.657Z] ===================================================================================================================
00:16:02.604 [2024-12-09T10:55:10.657Z] Total              :            1323.24  82.70  343.28  0.00  364907.68    4743.56  1070546.16
00:16:02.604 [2024-12-09 11:55:10.544892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:02.604 [2024-12-09 11:55:10.544918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.544934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.544943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.544955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:16:02.604 [2024-12-09 11:55:10.555164] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:02.604 [2024-12-09 11:55:10.555218] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:02.604 [2024-12-09 11:55:10.555240] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:16:02.604 [2024-12-09 11:55:10.555336] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:02.604 [2024-12-09 11:55:10.555364] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:02.604 [2024-12-09 11:55:10.555381] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200
00:16:02.604 [2024-12-09 11:55:10.561479] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:02.604 [2024-12-09 11:55:10.561527] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:02.604 [2024-12-09 11:55:10.561549] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40
00:16:02.604 [2024-12-09 11:55:10.561672] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:02.604 [2024-12-09 11:55:10.561699] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:02.604 [2024-12-09 11:55:10.561716] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cc0c0
00:16:02.604 [2024-12-09 11:55:10.561839] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:02.604 [2024-12-09
11:55:10.561866] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.561883] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf380 00:16:02.605 [2024-12-09 11:55:10.562010] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:02.605 [2024-12-09 11:55:10.562036] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.562054] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089e00 00:16:02.605 [2024-12-09 11:55:10.563096] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:02.605 [2024-12-09 11:55:10.563130] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.563147] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001707e000 00:16:02.605 [2024-12-09 11:55:10.563248] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:02.605 [2024-12-09 11:55:10.563273] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.563289] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017052c40 00:16:02.605 [2024-12-09 11:55:10.563394] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:02.605 [2024-12-09 11:55:10.563420] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.563436] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cb580 00:16:02.605 [2024-12-09 11:55:10.563532] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:02.605 [2024-12-09 11:55:10.563557] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:02.605 [2024-12-09 11:55:10.563574] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708d2c0 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3236084 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3236084 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- 
# case "$(type -t "$arg")" in 00:16:02.863 11:55:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3236084 00:16:03.800 [2024-12-09 11:55:11.559343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.559367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.561165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.561178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.561211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.561220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.561228] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.561238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.561249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.561255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.561265] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.561273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.565485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.565501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.566804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.566820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.567978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.567990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.569430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.569444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:16:03.800 [2024-12-09 11:55:11.570774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.570788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.572068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.572082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.573166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.573180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.575124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:03.800 [2024-12-09 11:55:11.575139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:03.800 [2024-12-09 11:55:11.575147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575163] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575203] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575242] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:16:03.800 [2024-12-09 11:55:11.575260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575275] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575357] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575392] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575427] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:16:03.800 [2024-12-09 11:55:11.575445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:16:03.800 [2024-12-09 11:55:11.575452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:16:03.800 [2024-12-09 11:55:11.575459] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:16:03.800 [2024-12-09 11:55:11.575467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
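The nvme_rdma.c triplets earlier (Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED, RDMA connect error -74, Failed to connect rqpair=...) are one failure seen at three layers: during reconnect the target side is itself mid-shutdown, so the RDMA connection manager answers the connect request with REJECTED instead of ESTABLISHED, the validator turns that into -74 (-EBADMSG on Linux), and the qpair poller gives up; bdev_nvme then reports the CQ transport error -6 / "Resetting controller failed" pairs above for each cnode. A minimal sketch of the same wait-and-validate step against the public librdmacm API (not SPDK's internal wrapper; link with -lrdmacm):

#include <stdio.h>
#include <rdma/rdma_cma.h>

/* Block for the next CM event on the channel and insist on ESTABLISHED,
 * the check nvme_rdma_validate_cm_event performs in the log above.
 * Returns 0 on success, -1 if the peer rejected or anything else arrived. */
static int wait_established(struct rdma_event_channel *ch)
{
    struct rdma_cm_event *ev;

    if (rdma_get_cm_event(ch, &ev))      /* blocks until the next CM event */
        return -1;

    int ok = (ev->event == RDMA_CM_EVENT_ESTABLISHED);
    if (!ok)
        fprintf(stderr, "Expected RDMA_CM_EVENT_ESTABLISHED but received %s (status = %d)\n",
                rdma_event_str(ev->event), ev->status);

    rdma_ack_cm_event(ev);               /* every fetched event must be acked */
    return ok ? 0 : -1;
}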
00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:03.800 rmmod nvme_rdma 00:16:03.800 rmmod nvme_fabrics 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3235812 ']' 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3235812 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3235812 ']' 00:16:03.800 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3235812 00:16:03.800 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3235812) - No such process 00:16:03.800 
11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3235812 is not found' 00:16:03.800 Process with pid 3235812 is not found 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:03.801 00:16:03.801 real 0m5.430s 00:16:03.801 user 0m16.054s 00:16:03.801 sys 0m1.161s 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:03.801 ************************************ 00:16:03.801 END TEST nvmf_shutdown_tc3 00:16:03.801 ************************************ 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.801 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 ************************************ 00:16:04.061 START TEST nvmf_shutdown_tc4 00:16:04.061 ************************************ 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
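The pci_bus_cache lookups in the trace above boil down to allow-lists of vendor:device IDs: 0x8086 (intel) for E810/X722 parts and 0x15b3 (mellanox) for ConnectX/BlueField parts such as 0x1015 (ConnectX-4 Lx), with the matching done per PCI function; the "Found 0000:da:00.0 (0x15b3 - 0x1015)" lines that follow come from exactly that scan. A self-contained C sketch of the same discovery via sysfs, hard-coding the one vendor:device pair this rig matches (an illustration of the mechanism, not the script itself):

#include <dirent.h>
#include <stdio.h>

/* Read a sysfs attribute like "0x15b3" as an unsigned hex value. */
static unsigned int read_hex(const char *path)
{
    unsigned int v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        fscanf(f, "%x", &v);
        fclose(f);
    }
    return v;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;
    char path[512];

    if (!d)
        return 1;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;   /* skip "." and ".." */
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/vendor", e->d_name);
        unsigned int vendor = read_hex(path);
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/device", e->d_name);
        unsigned int device = read_hex(path);
        if (vendor == 0x15b3 && device == 0x1015)   /* Mellanox ConnectX-4 Lx */
            printf("Found %s (0x%04x - 0x%04x)\n", e->d_name, vendor, device);
    }
    closedir(d);
    return 0;
}

Each hit's net/ subdirectory is what the later "Found net devices under 0000:da:00.0: mlx_0_0" step enumerates.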
00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:04.061 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:04.061 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:04.061 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:04.062 
11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:04.062 Found net devices under 0000:da:00.0: mlx_0_0 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:04.062 Found net devices under 0000:da:00.1: mlx_0_1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:04.062 11:55:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:04.062 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:04.062 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:16:04.062 altname enp218s0f0np0 00:16:04.062 altname ens818f0np0 00:16:04.062 inet 192.168.100.8/24 scope global mlx_0_0 00:16:04.062 valid_lft forever preferred_lft forever 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:04.062 11:55:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:04.062 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:04.062 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:16:04.062 altname enp218s0f1np1 00:16:04.062 altname ens818f1np1 00:16:04.062 inet 192.168.100.9/24 scope global mlx_0_1 00:16:04.062 valid_lft forever preferred_lft forever 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 
-- # get_ip_address mlx_0_1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:04.062 192.168.100.9' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:04.062 192.168.100.9' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:04.062 192.168.100.9' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3236890 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3236890 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3236890 ']' 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.062 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.320 [2024-12-09 11:55:12.145267] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:04.320 [2024-12-09 11:55:12.145307] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.320 [2024-12-09 11:55:12.222465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.320 [2024-12-09 11:55:12.264335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.320 [2024-12-09 11:55:12.264373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.320 [2024-12-09 11:55:12.264380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.320 [2024-12-09 11:55:12.264385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.320 [2024-12-09 11:55:12.264390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
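(Annotation, for reference: the EAL and app_setup_trace notices above come from launching the target as shown in the nvmfappstart trace and then polling its RPC socket until it answers. A minimal standalone sketch of that pattern, using the binary and socket paths from this run; the retry loop is illustrative, not the exact waitforlisten implementation:

# launch the target with the core mask and tracepoint mask from this run
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll until the RPC server answers on the default UNIX socket
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done)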
00:16:04.320 [2024-12-09 11:55:12.265989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.320 [2024-12-09 11:55:12.266091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.320 [2024-12-09 11:55:12.266208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.320 [2024-12-09 11:55:12.266208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:04.320 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.320 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:16:04.320 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.320 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.320 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.577 [2024-12-09 11:55:12.435569] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2056c40/0x205b130) succeed. 00:16:04.577 [2024-12-09 11:55:12.447056] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20582d0/0x209c7d0) succeed. 
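(Annotation: the rpc_cmd helper seen in the trace wraps scripts/rpc.py against the target's RPC socket, so the transport-creation step above, which produced the two create_ib_device notices, is equivalent to the standalone call below; the socket path is assumed to be the default /var/tmp/spdk.sock from this run:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192)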
00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.577 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.578 11:55:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:04.835 Malloc1 00:16:04.835 [2024-12-09 11:55:12.672022] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:04.835 Malloc2 00:16:04.835 Malloc3 00:16:04.835 Malloc4 00:16:04.835 Malloc5 00:16:04.835 Malloc6 00:16:05.092 Malloc7 00:16:05.092 Malloc8 00:16:05.092 Malloc9 00:16:05.092 Malloc10 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3237161 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:16:05.092 11:55:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:16:05.348 [2024-12-09 11:55:13.206107] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
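(Annotation: the rpcs.txt built by the cat loop above is not echoed into the log, so the exact RPC batch is not visible here. A representative per-subsystem sequence under that assumption — the cnodeN NQN pattern and the 192.168.100.8:4420 listener match what the target reports above, while the malloc size, block size, and serial numbers are illustrative placeholders:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 10); do
  $rpc bdev_malloc_create 64 512 -b Malloc$i     # size/block values are illustrative
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done)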
00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3236890 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3236890 ']' 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3236890 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3236890 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3236890' 00:16:10.605 killing process with pid 3236890 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3236890 00:16:10.605 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3236890 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 starting I/O failed: -6 00:16:10.605 starting I/O failed: -6 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.605 NVMe io qpair process completion error 00:16:10.863 11:55:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed with error (sct=0, sc=8) 00:16:11.432 Write completed 
with error (sct=0, sc=8) 00:16:11.432 starting I/O failed: -6 00:16:11.432 Write completed with error (sct=0, sc=8)
[... several hundred further identical 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines omitted: once the target is killed, every queued write on every qpair completes with the same error ...]
00:16:11.432 [2024-12-09 11:55:19.277515] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:16:11.433 [2024-12-09 11:55:19.289036] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:16:11.434 [2024-12-09 11:55:19.301372] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:16:11.435 Write completed with error (sct=0, sc=8)
[... remaining 'Write completed with error (sct=0, sc=8)' lines for the still-queued writes omitted ...]
00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 [2024-12-09 11:55:19.327116] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 
00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.435 Write completed with error (sct=0, sc=8) 00:16:11.435 starting I/O failed: -6 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 starting I/O failed: -6 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 starting I/O failed: -6 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 starting I/O failed: -6 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 starting I/O failed: -6 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with 
error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 [2024-12-09 11:55:19.340014] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:16:11.436 Write completed with 
error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.436 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write completed with error (sct=0, sc=8) 00:16:11.437 Write 
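For anyone triaging the burst above: sct=0 is the NVMe generic command status type, and within that set sc=0x8 is the "command aborted due to SQ deletion" code, which fits queues being torn down mid-test; the -6 is the same "No such device or address" errno that the transport errors report further down. A hedged way to tally the repeats instead of scrolling them (build.log is an illustrative name for a saved copy of this console output, not a file this job produced):

    # Count the two repeating messages; assumes one log entry per line in the
    # saved capture (build.log is a stand-in name, not a file this job wrote).
    grep -c 'Write completed with error (sct=0, sc=8)' build.log
    grep -c 'starting I/O failed: -6' build.log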
00:16:11.437 NVMe io qpair process completion error [message repeated 4 times]
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3237161
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3237161
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:12.005 11:55:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3237161
00:16:12.574 [2024-12-09 11:55:20.343413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.574 [2024-12-09 11:55:20.343475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:16:12.574 [2024-12-09 11:55:20.345540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.574 [2024-12-09 11:55:20.345578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:16:12.574 [2024-12-09 11:55:20.345606] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:16:12.574 [2024-12-09 11:55:20.347322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.574 [2024-12-09 11:55:20.347356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:16:12.574 [2024-12-09 11:55:20.347389] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:16:12.574 [2024-12-09 11:55:20.349610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.574 [2024-12-09 11:55:20.349642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
[00:16:12.574-00:16:12.575: each controller failure below interleaved with further repeated "Write completed with error (sct=0, sc=8)" messages]
00:16:12.575 [2024-12-09 11:55:20.352074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.352108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:16:12.575 [2024-12-09 11:55:20.354507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.354540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:16:12.575 [2024-12-09 11:55:20.356977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.357010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:16:12.575 [2024-12-09 11:55:20.359448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.359482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:16:12.575 [2024-12-09 11:55:20.362084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.362126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:16:12.575 [2024-12-09 11:55:20.364600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:12.575 [2024-12-09 11:55:20.364634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
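The xtrace in this region shows the harness's expected-failure idiom: shutdown.sh@158 runs "NOT wait 3237161", so this step passes only if the perf process (PID 3237161) exits non-zero once its controllers are gone. A minimal, hedged sketch of that idiom; the real autotest_common.sh version, as traced above, also validates the argument via valid_exec_arg and tracks the exit status in es:

    # Simplified stand-in for the harness helper, not the exact
    # autotest_common.sh code: run a command that is expected to fail,
    # and invert its exit status.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the test wants
    }
    # usage, as in the trace: NOT wait 3237161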
[00:16:12.575-00:16:12.577: "Write completed with error (sct=0, sc=8)" repeated several hundred more times as the remaining queued I/O drained]
00:16:12.577 Initializing NVMe Controllers
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:12.577 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
[after each attach: "Controller IO queue size 128, less than required." and "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."]
00:16:12.577 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnodeN) NSID 1 with lcore 0 [one entry per controller, N = 3, 2, 6, 7, 4, 5, 10, 8, 1, 9]
00:16:12.577 Initialization complete. Launching workers.
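The queue-size warning repeated above suggests keeping the submission queue depth at or below the controller's reported IO queue size (128), so requests are not queued inside the NVMe driver. A hedged illustration, not a command this CI run executed; the option spellings (-q/-o/-w/-t/-r) follow spdk_nvme_perf's usual interface, but verify them against `spdk_nvme_perf --help` for this build:

    # Illustrative rerun with queue depth matched to the controller's IO queue
    # size; the target address and subsystem NQN are taken from the log above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'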
00:16:12.577 ========================================================
00:16:12.578 Latency(us)
00:16:12.578 Device Information : IOPS MiB/s Average min max
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1436.12 61.71 88751.86 148.38 1266435.41
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1430.59 61.47 88509.90 10308.55 1210895.49
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1449.87 62.30 103119.80 120.94 2245417.80
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1457.41 62.62 102715.92 124.30 2246660.75
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1438.30 61.80 88031.87 143.87 1206400.05
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1438.47 61.81 87918.31 130.75 1228221.64
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1412.32 60.69 89897.90 26136.68 1250747.53
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1461.43 62.80 102403.07 121.44 2192084.32
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1403.26 60.30 90556.67 27553.12 1271860.08
00:16:12.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1480.54 63.62 101147.01 131.05 2090256.18
00:16:12.578 ========================================================
00:16:12.578 Total : 14408.31 619.11 94377.46 120.94 2246660.75
00:16:12.578
00:16:12.578 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3236890 ']'
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3236890
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3236890 ']'
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3236890
00:16:12.578 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3236890) - No such process
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3236890 is not found'
00:16:12.578 Process with pid 3236890 is not found
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:16:12.578
00:16:12.578 real 0m8.648s
00:16:12.578 user 0m32.253s
00:16:12.578 sys 0m1.101s
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:16:12.578 ************************************
00:16:12.578 END TEST nvmf_shutdown_tc4
00:16:12.578 ************************************
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:16:12.578
00:16:12.578 real 0m31.599s
00:16:12.578 user 1m36.846s
00:16:12.578 sys 0m9.017s
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:16:12.578 ************************************
00:16:12.578 END TEST nvmf_shutdown
00:16:12.578 ************************************
00:16:12.578 11:55:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
************************************ 00:16:12.838 START TEST nvmf_nsid 00:16:12.838 ************************************ 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:16:12.838 * Looking for test storage... 00:16:12.838 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:12.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.838 --rc genhtml_branch_coverage=1 00:16:12.838 --rc genhtml_function_coverage=1 00:16:12.838 --rc genhtml_legend=1 00:16:12.838 --rc geninfo_all_blocks=1 00:16:12.838 --rc geninfo_unexecuted_blocks=1 00:16:12.838 00:16:12.838 ' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:12.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.838 --rc genhtml_branch_coverage=1 00:16:12.838 --rc genhtml_function_coverage=1 00:16:12.838 --rc genhtml_legend=1 00:16:12.838 --rc geninfo_all_blocks=1 00:16:12.838 --rc geninfo_unexecuted_blocks=1 00:16:12.838 00:16:12.838 ' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:12.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.838 --rc genhtml_branch_coverage=1 00:16:12.838 --rc genhtml_function_coverage=1 00:16:12.838 --rc genhtml_legend=1 00:16:12.838 --rc geninfo_all_blocks=1 00:16:12.838 --rc geninfo_unexecuted_blocks=1 00:16:12.838 00:16:12.838 ' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:12.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.838 --rc genhtml_branch_coverage=1 00:16:12.838 --rc genhtml_function_coverage=1 00:16:12.838 --rc genhtml_legend=1 00:16:12.838 --rc geninfo_all_blocks=1 00:16:12.838 --rc geninfo_unexecuted_blocks=1 00:16:12.838 00:16:12.838 ' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.838 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.839 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:12.839 11:55:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.408 11:55:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:19.408 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:19.408 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
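[editor's note] The trace above walks nvmf/common.sh's PCI discovery: it seeds vendor/device ID tables (e810, x722, mlx) and matches each bus address against them, here finding two Mellanox functions (0x15b3:0x1015) at 0000:da:00.0 and 0000:da:00.1. A minimal standalone sketch of the same lookup, assuming pciutils' lspci is available (the script itself works from a cached bus scan, not these exact commands):
    # Mellanox functions (vendor 0x15b3), numeric IDs, full domain:bus addresses
    lspci -Dn -d 15b3:
    # Map each function to its netdev name the same way common.sh@411 does via sysfs
    for pci in 0000:da:00.0 0000:da:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # -> mlx_0_0, mlx_0_1 on this node
    done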
00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:19.408 Found net devices under 0000:da:00.0: mlx_0_0 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:19.408 Found net devices under 0000:da:00.1: mlx_0_1 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:19.408 11:55:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:16:19.408 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
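[editor's note] get_ip_address, traced just above, is a three-stage pipeline over ip(8): take the fourth column of the one-line address listing, then strip the prefix length. As a single equivalent one-liner (it yields the 192.168.100.8 seen on the next trace line):
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8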
00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:19.409 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:19.409 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:16:19.409 altname enp218s0f0np0 00:16:19.409 altname ens818f0np0 00:16:19.409 inet 192.168.100.8/24 scope global mlx_0_0 00:16:19.409 valid_lft forever preferred_lft forever 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:19.409 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:19.409 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:16:19.409 altname enp218s0f1np1 00:16:19.409 altname ens818f1np1 00:16:19.409 inet 192.168.100.9/24 scope global mlx_0_1 00:16:19.409 valid_lft forever preferred_lft forever 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:19.409 
11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:19.409 192.168.100.9' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:19.409 192.168.100.9' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:19.409 192.168.100.9' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:19.409 11:55:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3241404 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3241404 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3241404 ']' 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.409 [2024-12-09 11:55:26.731977] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:19.409 [2024-12-09 11:55:26.732028] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.409 [2024-12-09 11:55:26.809863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.409 [2024-12-09 11:55:26.853518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.409 [2024-12-09 11:55:26.853549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.409 [2024-12-09 11:55:26.853556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.409 [2024-12-09 11:55:26.853562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.409 [2024-12-09 11:55:26.853567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
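[editor's note] nvmfappstart and waitforlisten, traced above, launch the target with the flags shown (-i 0 -e 0xFFFF -m 1) and block until its RPC socket answers before the test proceeds. A minimal sketch of that handshake, assuming SPDK's stock scripts/rpc.py client and the standard spdk_get_version RPC; the retry/timeout policy here is illustrative, not the exact autotest_common.sh logic:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app responds (or give up after ~10 s)
    for i in {1..100}; do
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done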
00:16:19.409 [2024-12-09 11:55:26.854110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3241510 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:19.409 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=579e62a5-f99c-497c-b8b0-b018801fc79d 00:16:19.410 11:55:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=00a9ec07-a3f4-42f2-8a09-ad168a744c2c 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a2a50aee-8169-4162-9b5c-955f1f519728 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.410 11:55:27 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.410 null0 00:16:19.410 null1 00:16:19.410 [2024-12-09 11:55:27.036720] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:19.410 [2024-12-09 11:55:27.036762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241510 ] 00:16:19.410 null2 00:16:19.410 [2024-12-09 11:55:27.062012] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ead090/0x1ebd8f0) succeed. 00:16:19.410 [2024-12-09 11:55:27.071861] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eae540/0x1f3d980) succeed. 00:16:19.410 [2024-12-09 11:55:27.114243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.410 [2024-12-09 11:55:27.121734] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:19.410 [2024-12-09 11:55:27.157934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3241510 /var/tmp/tgt2.sock 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3241510 ']' 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:16:19.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:19.410 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:19.669 [2024-12-09 11:55:27.702381] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xee1ee0/0xd0b130) succeed. 00:16:19.669 [2024-12-09 11:55:27.713266] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xef7820/0xd4c7d0) succeed. 
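[editor's note] nsid.sh does not echo its RPC payload in this trace, but the null0/null1/null2 bdevs above and the port-4421 listener and nvme connect that follow imply a conventional setup sequence against the second target's socket. A hedged reconstruction using stock SPDK RPCs (the RPC names are real; the sizes, ordering, and which bdev backs which namespace are illustrative):
    rpc='scripts/rpc.py -s /var/tmp/tgt2.sock'
    $rpc nvmf_create_transport -t rdma
    $rpc bdev_null_create null0 100 4096                      # 100 MiB null bdev, 4 KiB blocks
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    # attach as NSID 1 with the UUID generated earlier, so the NGUID check below can match it
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u 579e62a5-f99c-497c-b8b0-b018801fc79d
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4421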
00:16:19.927 [2024-12-09 11:55:27.754908] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:19.927 nvme0n1 nvme0n2 00:16:19.927 nvme1n1 00:16:19.927 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:19.927 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:19.927 11:55:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 579e62a5-f99c-497c-b8b0-b018801fc79d 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=579e62a5f99c497cb8b0b018801fc79d 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 579E62A5F99C497CB8B0B018801FC79D 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 579E62A5F99C497CB8B0B018801FC79D == \5\7\9\E\6\2\A\5\F\9\9\C\4\9\7\C\B\8\B\0\B\0\1\8\8\0\1\F\C\7\9\D ]] 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:26.489 11:55:33 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 00a9ec07-a3f4-42f2-8a09-ad168a744c2c 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=00a9ec07a3f442f28a09ad168a744c2c 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 00A9EC07A3F442F28A09AD168A744C2C 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 00A9EC07A3F442F28A09AD168A744C2C == \0\0\A\9\E\C\0\7\A\3\F\4\4\2\F\2\8\A\0\9\A\D\1\6\8\A\7\4\4\C\2\C ]] 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a2a50aee-8169-4162-9b5c-955f1f519728 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a2a50aee816941629b5c955f1f519728 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A2A50AEE816941629B5C955F1F519728 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A2A50AEE816941629B5C955F1F519728 == 
\A\2\A\5\0\A\E\E\8\1\6\9\4\1\6\2\9\B\5\C\9\5\5\F\1\F\5\1\9\7\2\8 ]] 00:16:26.489 11:55:33 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3241510 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3241510 ']' 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3241510 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3241510 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3241510' 00:16:33.054 killing process with pid 3241510 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3241510 00:16:33.054 11:55:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3241510 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:33.054 rmmod nvme_rdma 00:16:33.054 rmmod nvme_fabrics 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3241404 ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3241404 ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3241404' 00:16:33.054 killing process with pid 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3241404 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:33.054 00:16:33.054 real 0m20.006s 00:16:33.054 user 0m28.983s 00:16:33.054 sys 0m5.361s 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:33.054 ************************************ 00:16:33.054 END TEST nvmf_nsid 00:16:33.054 ************************************ 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:33.054 00:16:33.054 real 7m24.628s 00:16:33.054 user 17m49.861s 00:16:33.054 sys 1m55.989s 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.054 11:55:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.054 ************************************ 00:16:33.054 END TEST nvmf_target_extra 00:16:33.054 ************************************ 00:16:33.054 11:55:40 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:16:33.054 11:55:40 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.054 11:55:40 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.054 11:55:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:33.054 ************************************ 00:16:33.054 START TEST nvmf_host 00:16:33.054 ************************************ 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:16:33.054 * Looking for test storage... 
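The NGUID checks traced above boil down to two small helpers: uuid2nguid strips the dashes from a UUID and upper-cases it, and nvme_get_nguid reads the namespace's NGUID back through nvme-cli and jq. A minimal sketch reconstructed from the traced commands (the exact helper definitions in target/nsid.sh and nvmf/common.sh may differ):

uuid2nguid() {
    # 00a9ec07-a3f4-42f2-8a09-ad168a744c2c -> 00A9EC07A3F442F28A09AD168A744C2C
    tr -d - <<< "${1^^}"
}

nvme_get_nguid() {
    local ctrlr=$1 nsid=$2
    # Read the NGUID of /dev/<ctrlr>n<nsid> and normalize to upper case
    nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]'
}

[[ $(uuid2nguid 00a9ec07-a3f4-42f2-8a09-ad168a744c2c) == $(nvme_get_nguid nvme0 2) ]] && echo match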
00:16:33.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:33.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.054 --rc genhtml_branch_coverage=1 00:16:33.054 --rc genhtml_function_coverage=1 00:16:33.054 --rc genhtml_legend=1 00:16:33.054 --rc geninfo_all_blocks=1 00:16:33.054 --rc geninfo_unexecuted_blocks=1 00:16:33.054 00:16:33.054 ' 00:16:33.054 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:16:33.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.054 --rc genhtml_branch_coverage=1 00:16:33.054 --rc genhtml_function_coverage=1 00:16:33.054 --rc genhtml_legend=1 00:16:33.054 --rc geninfo_all_blocks=1 00:16:33.054 --rc geninfo_unexecuted_blocks=1 00:16:33.054 00:16:33.055 ' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:33.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.055 --rc genhtml_branch_coverage=1 00:16:33.055 --rc genhtml_function_coverage=1 00:16:33.055 --rc genhtml_legend=1 00:16:33.055 --rc geninfo_all_blocks=1 00:16:33.055 --rc geninfo_unexecuted_blocks=1 00:16:33.055 00:16:33.055 ' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:33.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.055 --rc genhtml_branch_coverage=1 00:16:33.055 --rc genhtml_function_coverage=1 00:16:33.055 --rc genhtml_legend=1 00:16:33.055 --rc geninfo_all_blocks=1 00:16:33.055 --rc geninfo_unexecuted_blocks=1 00:16:33.055 00:16:33.055 ' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.055 ************************************ 00:16:33.055 START TEST nvmf_multicontroller 00:16:33.055 ************************************ 00:16:33.055 11:55:40 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:33.055 * Looking for test storage... 00:16:33.055 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:33.055 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:33.055 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:16:33.055 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.315 --rc genhtml_branch_coverage=1 00:16:33.315 --rc genhtml_function_coverage=1 00:16:33.315 --rc genhtml_legend=1 00:16:33.315 --rc geninfo_all_blocks=1 00:16:33.315 --rc geninfo_unexecuted_blocks=1 00:16:33.315 00:16:33.315 ' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.315 --rc genhtml_branch_coverage=1 00:16:33.315 --rc genhtml_function_coverage=1 00:16:33.315 --rc genhtml_legend=1 00:16:33.315 --rc geninfo_all_blocks=1 00:16:33.315 --rc geninfo_unexecuted_blocks=1 00:16:33.315 00:16:33.315 ' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.315 --rc genhtml_branch_coverage=1 00:16:33.315 --rc genhtml_function_coverage=1 00:16:33.315 --rc genhtml_legend=1 00:16:33.315 --rc geninfo_all_blocks=1 00:16:33.315 --rc geninfo_unexecuted_blocks=1 00:16:33.315 00:16:33.315 ' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.315 --rc genhtml_branch_coverage=1 00:16:33.315 --rc genhtml_function_coverage=1 00:16:33.315 --rc genhtml_legend=1 00:16:33.315 --rc geninfo_all_blocks=1 00:16:33.315 --rc geninfo_unexecuted_blocks=1 00:16:33.315 00:16:33.315 ' 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
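The lt/cmp_versions trace above is a component-wise dotted-version comparison: both versions are split on '.', '-' and ':', components are compared numerically left to right, and the shorter version is padded. A simplified reconstruction of what scripts/common.sh does here (assumption: the real helper routes each component through its decimal() validator, elided below):

lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: less-than
    done
    return 1   # equal: not less-than
}

lt 1.15 2 && echo 'lcov 1.15 < 2: legacy LCOV_OPTS selected'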
00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.315 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.316 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.316 11:55:41 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:16:33.316 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:16:33.316 00:16:33.316 real 0m0.208s 00:16:33.316 user 0m0.126s 00:16:33.316 sys 0m0.096s 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:33.316 ************************************ 00:16:33.316 END TEST nvmf_multicontroller 00:16:33.316 ************************************ 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.316 ************************************ 00:16:33.316 START TEST nvmf_aer 00:16:33.316 ************************************ 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:33.316 * Looking for test storage... 
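The multicontroller test traced above bails out before doing any work when the transport is RDMA. Its guard, condensed from the trace (TEST_TRANSPORT is an assumed variable name; the script compares the value passed via --transport):

if [ "$TEST_TRANSPORT" == rdma ]; then
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0   # report success so the rest of the suite continues
fi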
00:16:33.316 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:16:33.316 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:33.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.576 --rc genhtml_branch_coverage=1 00:16:33.576 --rc genhtml_function_coverage=1 00:16:33.576 --rc genhtml_legend=1 00:16:33.576 --rc geninfo_all_blocks=1 00:16:33.576 --rc geninfo_unexecuted_blocks=1 00:16:33.576 00:16:33.576 ' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:33.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.576 --rc genhtml_branch_coverage=1 00:16:33.576 --rc genhtml_function_coverage=1 00:16:33.576 --rc genhtml_legend=1 00:16:33.576 --rc geninfo_all_blocks=1 00:16:33.576 --rc geninfo_unexecuted_blocks=1 00:16:33.576 00:16:33.576 ' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:33.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.576 --rc genhtml_branch_coverage=1 00:16:33.576 --rc genhtml_function_coverage=1 00:16:33.576 --rc genhtml_legend=1 00:16:33.576 --rc geninfo_all_blocks=1 00:16:33.576 --rc geninfo_unexecuted_blocks=1 00:16:33.576 00:16:33.576 ' 00:16:33.576 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:33.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.576 --rc genhtml_branch_coverage=1 00:16:33.576 --rc genhtml_function_coverage=1 00:16:33.576 --rc genhtml_legend=1 00:16:33.576 --rc geninfo_all_blocks=1 00:16:33.576 --rc geninfo_unexecuted_blocks=1 00:16:33.576 00:16:33.576 ' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.577 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:16:33.577 11:55:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:40.143 11:55:47 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:40.143 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:40.143 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.143 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:40.144 Found net devices under 0000:da:00.0: mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.144 
11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:40.144 Found net devices under 0000:da:00.1: mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:40.144 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:40.144 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:16:40.144 altname enp218s0f0np0 00:16:40.144 altname ens818f0np0 00:16:40.144 inet 192.168.100.8/24 scope global mlx_0_0 00:16:40.144 valid_lft forever preferred_lft forever 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:40.144 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:40.144 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:16:40.144 altname enp218s0f1np1 00:16:40.144 altname ens818f1np1 00:16:40.144 inet 192.168.100.9/24 scope global mlx_0_1 00:16:40.144 valid_lft forever preferred_lft forever 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:40.144 192.168.100.9' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:40.144 192.168.100.9' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:40.144 192.168.100.9' 
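The IP discovery just traced is a one-liner per interface: column four of `ip -o -4 addr show` is the CIDR address, and cut strips the prefix length. Reconstructed from the traced get_ip_address calls in nvmf/common.sh:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run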
00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.144 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3247206 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3247206 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3247206 ']' 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 [2024-12-09 11:55:47.353095] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:40.145 [2024-12-09 11:55:47.353140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.145 [2024-12-09 11:55:47.429137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.145 [2024-12-09 11:55:47.471845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.145 [2024-12-09 11:55:47.471881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.145 [2024-12-09 11:55:47.471888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.145 [2024-12-09 11:55:47.471895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:40.145 [2024-12-09 11:55:47.471900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.145 [2024-12-09 11:55:47.473361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.145 [2024-12-09 11:55:47.473470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.145 [2024-12-09 11:55:47.473578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.145 [2024-12-09 11:55:47.473579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 [2024-12-09 11:55:47.636967] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13dd940/0x13e1e30) succeed. 00:16:40.145 [2024-12-09 11:55:47.648381] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13defd0/0x14234d0) succeed. 
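With both IB devices created, the entries that follow configure the AER target entirely over the RPC socket: a malloc bdev, a subsystem, a namespace, and an RDMA listener. A roughly equivalent standalone sequence using SPDK's stock rpc.py client (a sketch; flags taken verbatim from the trace):

    # Rough equivalent of the host/aer.sh setup steps, via rpc.py.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420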
00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 Malloc0 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 [2024-12-09 11:55:47.824630] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 [ 00:16:40.145 { 00:16:40.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:40.145 "subtype": "Discovery", 00:16:40.145 "listen_addresses": [], 00:16:40.145 "allow_any_host": true, 00:16:40.145 "hosts": [] 00:16:40.145 }, 00:16:40.145 { 00:16:40.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.145 "subtype": "NVMe", 00:16:40.145 "listen_addresses": [ 00:16:40.145 { 00:16:40.145 "trtype": "RDMA", 00:16:40.145 "adrfam": "IPv4", 00:16:40.145 "traddr": "192.168.100.8", 00:16:40.145 "trsvcid": "4420" 00:16:40.145 } 00:16:40.145 ], 00:16:40.145 "allow_any_host": true, 00:16:40.145 "hosts": [], 00:16:40.145 "serial_number": "SPDK00000000000001", 00:16:40.145 "model_number": "SPDK bdev Controller", 00:16:40.145 "max_namespaces": 2, 00:16:40.145 "min_cntlid": 1, 00:16:40.145 "max_cntlid": 65519, 00:16:40.145 "namespaces": [ 00:16:40.145 { 00:16:40.145 "nsid": 1, 00:16:40.145 "bdev_name": "Malloc0", 00:16:40.145 "name": "Malloc0", 00:16:40.145 "nguid": "01A0280CFE97484C9CDF63978A3A5CF8", 00:16:40.145 "uuid": "01a0280c-fe97-484c-9cdf-63978a3a5cf8" 00:16:40.145 } 00:16:40.145 ] 00:16:40.145 } 00:16:40.145 ] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3247232 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:16:40.145 11:55:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 Malloc1 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.145 [ 00:16:40.145 { 00:16:40.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:40.145 "subtype": "Discovery", 00:16:40.145 "listen_addresses": [], 00:16:40.145 "allow_any_host": true, 00:16:40.145 "hosts": [] 00:16:40.145 }, 00:16:40.145 { 00:16:40.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.145 "subtype": "NVMe", 00:16:40.145 "listen_addresses": [ 00:16:40.145 { 00:16:40.145 "trtype": "RDMA", 00:16:40.145 "adrfam": "IPv4", 00:16:40.145 "traddr": "192.168.100.8", 00:16:40.145 "trsvcid": "4420" 00:16:40.145 } 00:16:40.145 ], 00:16:40.145 "allow_any_host": true, 00:16:40.145 "hosts": [], 00:16:40.145 "serial_number": "SPDK00000000000001", 00:16:40.145 "model_number": "SPDK bdev Controller", 00:16:40.145 "max_namespaces": 2, 00:16:40.145 "min_cntlid": 1, 00:16:40.145 "max_cntlid": 65519, 00:16:40.145 "namespaces": [ 00:16:40.145 { 00:16:40.145 "nsid": 1, 00:16:40.145 "bdev_name": "Malloc0", 00:16:40.145 "name": "Malloc0", 00:16:40.145 "nguid": "01A0280CFE97484C9CDF63978A3A5CF8", 00:16:40.145 "uuid": "01a0280c-fe97-484c-9cdf-63978a3a5cf8" 00:16:40.145 }, 00:16:40.145 { 00:16:40.145 "nsid": 2, 00:16:40.145 "bdev_name": "Malloc1", 00:16:40.145 "name": "Malloc1", 00:16:40.145 "nguid": "F2803D47AF654538BF7FECA788D3D7F6", 00:16:40.145 "uuid": "f2803d47-af65-4538-bf7f-eca788d3d7f6" 00:16:40.145 } 00:16:40.145 ] 00:16:40.145 } 00:16:40.145 ] 00:16:40.145 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3247232 00:16:40.146 Asynchronous Event Request test 00:16:40.146 Attaching to 192.168.100.8 00:16:40.146 Attached to 192.168.100.8 00:16:40.146 Registering asynchronous event callbacks... 00:16:40.146 Starting namespace attribute notice tests for all controllers... 00:16:40.146 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:40.146 aer_cb - Changed Namespace 00:16:40.146 Cleaning up... 
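The AER run above is synchronized through a touch file: the aer binary creates /tmp/aer_touch_file once its event callbacks are registered, while the script polls for it before adding Malloc1 (which triggers the namespace-change AEN the test then observes). A sketch of the waitforfile loop as traced (the real helper lives in common/autotest_common.sh):

    # Block until a file appears, polling every 0.1s (~20s ceiling).
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            if [ $i -lt 200 ]; then
                i=$((i + 1))
                sleep 0.1
            else
                return 1   # timed out
            fi
        done
        return 0
    }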
00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.146 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:40.404 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:40.405 rmmod nvme_rdma 00:16:40.405 rmmod nvme_fabrics 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3247206 ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3247206 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3247206 ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3247206 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247206 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247206' 00:16:40.405 killing process 
with pid 3247206 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3247206 00:16:40.405 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3247206 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:40.664 00:16:40.664 real 0m7.300s 00:16:40.664 user 0m5.995s 00:16:40.664 sys 0m4.816s 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 ************************************ 00:16:40.664 END TEST nvmf_aer 00:16:40.664 ************************************ 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 ************************************ 00:16:40.664 START TEST nvmf_async_init 00:16:40.664 ************************************ 00:16:40.664 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:40.664 * Looking for test storage... 00:16:40.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.924 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:16:40.925 
11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:40.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.925 --rc genhtml_branch_coverage=1 00:16:40.925 --rc genhtml_function_coverage=1 00:16:40.925 --rc genhtml_legend=1 00:16:40.925 --rc geninfo_all_blocks=1 00:16:40.925 --rc geninfo_unexecuted_blocks=1 00:16:40.925 00:16:40.925 ' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:40.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.925 --rc genhtml_branch_coverage=1 00:16:40.925 --rc genhtml_function_coverage=1 00:16:40.925 --rc genhtml_legend=1 00:16:40.925 --rc geninfo_all_blocks=1 00:16:40.925 --rc geninfo_unexecuted_blocks=1 00:16:40.925 00:16:40.925 ' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:40.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.925 --rc genhtml_branch_coverage=1 00:16:40.925 --rc genhtml_function_coverage=1 00:16:40.925 --rc genhtml_legend=1 00:16:40.925 --rc geninfo_all_blocks=1 00:16:40.925 --rc geninfo_unexecuted_blocks=1 00:16:40.925 00:16:40.925 ' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:40.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.925 --rc genhtml_branch_coverage=1 00:16:40.925 --rc genhtml_function_coverage=1 00:16:40.925 --rc genhtml_legend=1 00:16:40.925 --rc geninfo_all_blocks=1 00:16:40.925 --rc geninfo_unexecuted_blocks=1 00:16:40.925 00:16:40.925 ' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
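Above, the harness decides whether the installed lcov understands the branch/function-coverage flags by comparing version strings with scripts/common.sh's lt/cmp_versions helpers (lt 1.15 2). A simplified sketch of that comparison, assuming purely numeric dot-separated components (the real helper also normalizes each component via decimal):

    lt() { cmp_versions "$1" '<' "$2"; }

    # Compare dotted versions component by component; missing parts count as 0.
    cmp_versions() {
        local -a ver1 ver2
        local v d1 d2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 == d2)) && continue
            case $2 in
                '<') ((d1 < d2)) ;;
                '>') ((d1 > d2)) ;;
            esac
            return
        done
        [[ $2 == *=* ]]   # equal versions satisfy only <=, >=, ==
    }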
00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.925 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
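host/async_init.sh pins its fixture parameters here (a 1024-block null bdev with 512-byte blocks); just below it derives a dash-free NGUID from uuidgen, and once the target is up those pieces become the null bdev and namespace seen later in the trace. Condensed into rpc.py calls (a sketch; ordering compressed relative to the actual script):

    # async_init fixture, condensed: null bdev plus a dash-free NGUID for its namespace.
    null_bdev=null0 null_bdev_size=1024 null_block_size=512
    nguid=$(uuidgen | tr -d -)   # e.g. cd8ff8168ba843b59d820fd5b2d307f1
    rpc.py bdev_null_create "$null_bdev" "$null_bdev_size" "$null_block_size"
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$null_bdev" -g "$nguid"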
00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cd8ff8168ba843b59d820fd5b2d307f1 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.925 11:55:48 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:47.495 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:47.495 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:47.495 Found net devices under 0000:da:00.0: mlx_0_0 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:47.495 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:47.496 Found net devices under 0000:da:00.1: mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:47.496 11:55:54 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:47.496 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.496 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:16:47.496 altname enp218s0f0np0 00:16:47.496 altname ens818f0np0 00:16:47.496 inet 192.168.100.8/24 scope global mlx_0_0 00:16:47.496 valid_lft forever preferred_lft forever 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:47.496 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.496 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:16:47.496 altname enp218s0f1np1 00:16:47.496 altname ens818f1np1 00:16:47.496 inet 192.168.100.9/24 scope global mlx_0_1 00:16:47.496 valid_lft forever preferred_lft forever 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:47.496 192.168.100.9' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:47.496 192.168.100.9' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:47.496 192.168.100.9' 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:47.496 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3250525 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3250525 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3250525 ']' 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 [2024-12-09 11:55:54.735852] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:47.497 [2024-12-09 11:55:54.735894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.497 [2024-12-09 11:55:54.812277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.497 [2024-12-09 11:55:54.853011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.497 [2024-12-09 11:55:54.853046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.497 [2024-12-09 11:55:54.853054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.497 [2024-12-09 11:55:54.853060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.497 [2024-12-09 11:55:54.853065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
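Interface discovery and address checks ran again for this test (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9), each time through the same ip/awk/cut pipeline. As a standalone helper, the get_ip_address seen throughout the trace boils down to:

    # First IPv4 address on an interface, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # e.g. get_ip_address mlx_0_0  ->  192.168.100.8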
00:16:47.497 [2024-12-09 11:55:54.853634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 [2024-12-09 11:55:55.020635] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd70620/0xd74b10) succeed. 00:16:47.497 [2024-12-09 11:55:55.032354] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd71ad0/0xdb61b0) succeed. 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 null0 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cd8ff8168ba843b59d820fd5b2d307f1 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 [2024-12-09 11:55:55.116456] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 nvme0n1 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 [ 00:16:47.497 { 00:16:47.497 "name": "nvme0n1", 00:16:47.497 "aliases": [ 00:16:47.497 "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1" 00:16:47.497 ], 00:16:47.497 "product_name": "NVMe disk", 00:16:47.497 "block_size": 512, 00:16:47.497 "num_blocks": 2097152, 00:16:47.497 "uuid": "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1", 00:16:47.497 "numa_id": 1, 00:16:47.497 "assigned_rate_limits": { 00:16:47.497 "rw_ios_per_sec": 0, 00:16:47.497 "rw_mbytes_per_sec": 0, 00:16:47.497 "r_mbytes_per_sec": 0, 00:16:47.497 "w_mbytes_per_sec": 0 00:16:47.497 }, 00:16:47.497 "claimed": false, 00:16:47.497 "zoned": false, 00:16:47.497 "supported_io_types": { 00:16:47.497 "read": true, 00:16:47.497 "write": true, 00:16:47.497 "unmap": false, 00:16:47.497 "flush": true, 00:16:47.497 "reset": true, 00:16:47.497 "nvme_admin": true, 00:16:47.497 "nvme_io": true, 00:16:47.497 "nvme_io_md": false, 00:16:47.497 "write_zeroes": true, 00:16:47.497 "zcopy": false, 00:16:47.497 "get_zone_info": false, 00:16:47.497 "zone_management": false, 00:16:47.497 "zone_append": false, 00:16:47.497 "compare": true, 00:16:47.497 "compare_and_write": true, 00:16:47.497 "abort": true, 00:16:47.497 "seek_hole": false, 00:16:47.497 "seek_data": false, 00:16:47.497 "copy": true, 00:16:47.497 "nvme_iov_md": false 00:16:47.497 }, 00:16:47.497 "memory_domains": [ 00:16:47.497 { 00:16:47.497 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:47.497 "dma_device_type": 0 00:16:47.497 } 00:16:47.497 ], 00:16:47.497 "driver_specific": { 00:16:47.497 "nvme": [ 00:16:47.497 { 00:16:47.497 "trid": { 00:16:47.497 "trtype": "RDMA", 00:16:47.497 "adrfam": "IPv4", 00:16:47.497 "traddr": "192.168.100.8", 00:16:47.497 "trsvcid": "4420", 00:16:47.497 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:47.497 }, 00:16:47.497 "ctrlr_data": { 00:16:47.497 "cntlid": 1, 00:16:47.497 "vendor_id": "0x8086", 00:16:47.497 "model_number": "SPDK bdev Controller", 00:16:47.497 "serial_number": "00000000000000000000", 00:16:47.497 "firmware_revision": "25.01", 00:16:47.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:47.497 "oacs": { 00:16:47.497 "security": 0, 
00:16:47.497 "format": 0, 00:16:47.497 "firmware": 0, 00:16:47.497 "ns_manage": 0 00:16:47.497 }, 00:16:47.497 "multi_ctrlr": true, 00:16:47.497 "ana_reporting": false 00:16:47.497 }, 00:16:47.497 "vs": { 00:16:47.497 "nvme_version": "1.3" 00:16:47.497 }, 00:16:47.497 "ns_data": { 00:16:47.497 "id": 1, 00:16:47.497 "can_share": true 00:16:47.497 } 00:16:47.497 } 00:16:47.497 ], 00:16:47.497 "mp_policy": "active_passive" 00:16:47.497 } 00:16:47.497 } 00:16:47.497 ] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.497 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 [2024-12-09 11:55:55.229762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:47.498 [2024-12-09 11:55:55.256580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:47.498 [2024-12-09 11:55:55.284979] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 [ 00:16:47.498 { 00:16:47.498 "name": "nvme0n1", 00:16:47.498 "aliases": [ 00:16:47.498 "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1" 00:16:47.498 ], 00:16:47.498 "product_name": "NVMe disk", 00:16:47.498 "block_size": 512, 00:16:47.498 "num_blocks": 2097152, 00:16:47.498 "uuid": "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1", 00:16:47.498 "numa_id": 1, 00:16:47.498 "assigned_rate_limits": { 00:16:47.498 "rw_ios_per_sec": 0, 00:16:47.498 "rw_mbytes_per_sec": 0, 00:16:47.498 "r_mbytes_per_sec": 0, 00:16:47.498 "w_mbytes_per_sec": 0 00:16:47.498 }, 00:16:47.498 "claimed": false, 00:16:47.498 "zoned": false, 00:16:47.498 "supported_io_types": { 00:16:47.498 "read": true, 00:16:47.498 "write": true, 00:16:47.498 "unmap": false, 00:16:47.498 "flush": true, 00:16:47.498 "reset": true, 00:16:47.498 "nvme_admin": true, 00:16:47.498 "nvme_io": true, 00:16:47.498 "nvme_io_md": false, 00:16:47.498 "write_zeroes": true, 00:16:47.498 "zcopy": false, 00:16:47.498 "get_zone_info": false, 00:16:47.498 "zone_management": false, 00:16:47.498 "zone_append": false, 00:16:47.498 "compare": true, 00:16:47.498 "compare_and_write": true, 00:16:47.498 "abort": true, 00:16:47.498 "seek_hole": false, 00:16:47.498 "seek_data": false, 00:16:47.498 "copy": true, 00:16:47.498 "nvme_iov_md": false 00:16:47.498 }, 00:16:47.498 "memory_domains": [ 00:16:47.498 { 00:16:47.498 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:47.498 "dma_device_type": 0 00:16:47.498 } 00:16:47.498 ], 00:16:47.498 "driver_specific": { 00:16:47.498 "nvme": [ 00:16:47.498 { 00:16:47.498 "trid": { 00:16:47.498 "trtype": "RDMA", 00:16:47.498 "adrfam": "IPv4", 00:16:47.498 "traddr": "192.168.100.8", 
00:16:47.498 "trsvcid": "4420", 00:16:47.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:47.498 }, 00:16:47.498 "ctrlr_data": { 00:16:47.498 "cntlid": 2, 00:16:47.498 "vendor_id": "0x8086", 00:16:47.498 "model_number": "SPDK bdev Controller", 00:16:47.498 "serial_number": "00000000000000000000", 00:16:47.498 "firmware_revision": "25.01", 00:16:47.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:47.498 "oacs": { 00:16:47.498 "security": 0, 00:16:47.498 "format": 0, 00:16:47.498 "firmware": 0, 00:16:47.498 "ns_manage": 0 00:16:47.498 }, 00:16:47.498 "multi_ctrlr": true, 00:16:47.498 "ana_reporting": false 00:16:47.498 }, 00:16:47.498 "vs": { 00:16:47.498 "nvme_version": "1.3" 00:16:47.498 }, 00:16:47.498 "ns_data": { 00:16:47.498 "id": 1, 00:16:47.498 "can_share": true 00:16:47.498 } 00:16:47.498 } 00:16:47.498 ], 00:16:47.498 "mp_policy": "active_passive" 00:16:47.498 } 00:16:47.498 } 00:16:47.498 ] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JgAG0Vbko9 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JgAG0Vbko9 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.JgAG0Vbko9 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 [2024-12-09 11:55:55.368577] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 [2024-12-09 11:55:55.388634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.498 nvme0n1 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 [ 00:16:47.498 { 00:16:47.498 "name": "nvme0n1", 00:16:47.498 "aliases": [ 00:16:47.498 "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1" 00:16:47.498 ], 00:16:47.498 "product_name": "NVMe disk", 00:16:47.498 "block_size": 512, 00:16:47.498 "num_blocks": 2097152, 00:16:47.498 "uuid": "cd8ff816-8ba8-43b5-9d82-0fd5b2d307f1", 00:16:47.498 "numa_id": 1, 00:16:47.498 "assigned_rate_limits": { 00:16:47.498 "rw_ios_per_sec": 0, 00:16:47.498 "rw_mbytes_per_sec": 0, 00:16:47.498 "r_mbytes_per_sec": 0, 00:16:47.498 "w_mbytes_per_sec": 0 00:16:47.498 }, 00:16:47.498 "claimed": false, 00:16:47.498 "zoned": false, 00:16:47.498 "supported_io_types": { 00:16:47.498 "read": true, 00:16:47.498 "write": true, 00:16:47.498 "unmap": false, 00:16:47.498 "flush": true, 00:16:47.498 "reset": true, 00:16:47.498 "nvme_admin": true, 00:16:47.498 "nvme_io": true, 00:16:47.498 "nvme_io_md": false, 00:16:47.498 "write_zeroes": true, 00:16:47.498 "zcopy": false, 00:16:47.498 "get_zone_info": false, 00:16:47.498 "zone_management": false, 00:16:47.498 "zone_append": false, 00:16:47.498 "compare": true, 00:16:47.498 "compare_and_write": true, 00:16:47.498 "abort": true, 00:16:47.498 "seek_hole": false, 00:16:47.498 "seek_data": false, 00:16:47.498 "copy": true, 00:16:47.498 "nvme_iov_md": false 00:16:47.498 }, 00:16:47.498 "memory_domains": [ 00:16:47.498 { 00:16:47.498 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:47.498 "dma_device_type": 0 00:16:47.498 } 00:16:47.498 ], 00:16:47.498 "driver_specific": { 00:16:47.498 "nvme": [ 00:16:47.498 { 00:16:47.498 "trid": { 00:16:47.498 "trtype": "RDMA", 00:16:47.498 "adrfam": "IPv4", 00:16:47.498 "traddr": "192.168.100.8", 00:16:47.498 "trsvcid": "4421", 00:16:47.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:47.498 }, 00:16:47.498 "ctrlr_data": { 00:16:47.498 "cntlid": 3, 00:16:47.498 "vendor_id": "0x8086", 00:16:47.498 "model_number": "SPDK bdev Controller", 00:16:47.498 
"serial_number": "00000000000000000000", 00:16:47.498 "firmware_revision": "25.01", 00:16:47.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:47.498 "oacs": { 00:16:47.498 "security": 0, 00:16:47.498 "format": 0, 00:16:47.498 "firmware": 0, 00:16:47.498 "ns_manage": 0 00:16:47.498 }, 00:16:47.498 "multi_ctrlr": true, 00:16:47.498 "ana_reporting": false 00:16:47.498 }, 00:16:47.498 "vs": { 00:16:47.498 "nvme_version": "1.3" 00:16:47.498 }, 00:16:47.498 "ns_data": { 00:16:47.498 "id": 1, 00:16:47.498 "can_share": true 00:16:47.498 } 00:16:47.499 } 00:16:47.499 ], 00:16:47.499 "mp_policy": "active_passive" 00:16:47.499 } 00:16:47.499 } 00:16:47.499 ] 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.JgAG0Vbko9 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.499 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:47.499 rmmod nvme_rdma 00:16:47.499 rmmod nvme_fabrics 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3250525 ']' 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3250525 ']' 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.758 11:55:55 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3250525' 00:16:47.758 killing process with pid 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3250525 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.758 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:48.017 00:16:48.017 real 0m7.179s 00:16:48.017 user 0m2.995s 00:16:48.017 sys 0m4.688s 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:48.017 ************************************ 00:16:48.017 END TEST nvmf_async_init 00:16:48.017 ************************************ 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.017 ************************************ 00:16:48.017 START TEST dma 00:16:48.017 ************************************ 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:48.017 * Looking for test storage... 
00:16:48.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:16:48.017 11:55:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.017 --rc genhtml_branch_coverage=1 00:16:48.017 --rc genhtml_function_coverage=1 00:16:48.017 --rc genhtml_legend=1 00:16:48.017 --rc geninfo_all_blocks=1 00:16:48.017 --rc geninfo_unexecuted_blocks=1 00:16:48.017 00:16:48.017 ' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.017 --rc genhtml_branch_coverage=1 00:16:48.017 --rc genhtml_function_coverage=1 00:16:48.017 --rc genhtml_legend=1 00:16:48.017 --rc geninfo_all_blocks=1 00:16:48.017 --rc geninfo_unexecuted_blocks=1 00:16:48.017 00:16:48.017 ' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.017 --rc genhtml_branch_coverage=1 00:16:48.017 --rc genhtml_function_coverage=1 00:16:48.017 --rc genhtml_legend=1 00:16:48.017 --rc geninfo_all_blocks=1 00:16:48.017 --rc geninfo_unexecuted_blocks=1 00:16:48.017 00:16:48.017 ' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.017 --rc genhtml_branch_coverage=1 00:16:48.017 --rc genhtml_function_coverage=1 00:16:48.017 --rc genhtml_legend=1 00:16:48.017 --rc geninfo_all_blocks=1 00:16:48.017 --rc geninfo_unexecuted_blocks=1 00:16:48.017 00:16:48.017 ' 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.017 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.277 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.277 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:48.278 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:48.278 11:55:56 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:16:48.278 11:55:56 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:54.847 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:54.848 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:54.848 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:54.848 Found net devices under 0000:da:00.0: mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:54.848 Found net devices under 0000:da:00.1: mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:54.848 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:54.848 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:16:54.848 altname enp218s0f0np0 00:16:54.848 altname ens818f0np0 00:16:54.848 inet 192.168.100.8/24 scope global mlx_0_0 00:16:54.848 valid_lft forever preferred_lft forever 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:54.848 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:54.848 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:16:54.848 altname enp218s0f1np1 00:16:54.848 altname ens818f1np1 00:16:54.848 inet 192.168.100.9/24 scope global mlx_0_1 00:16:54.848 valid_lft forever preferred_lft forever 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:54.848 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:54.849 192.168.100.9' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:54.849 192.168.100.9' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:54.849 192.168.100.9' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3253840 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3253840 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3253840 ']' 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.849 11:56:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 [2024-12-09 11:56:01.970120] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:16:54.849 [2024-12-09 11:56:01.970178] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.849 [2024-12-09 11:56:02.046642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.849 [2024-12-09 11:56:02.086217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.849 [2024-12-09 11:56:02.086252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.849 [2024-12-09 11:56:02.086259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.849 [2024-12-09 11:56:02.086266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.849 [2024-12-09 11:56:02.086270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
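(The allocate_nic_ips/get_ip_address tracing above boils down to one pipeline per RDMA netdev. A sketch using the interface name and the address observed in this run:

    # first IPv4 address on an RDMA netdev, exactly as nvmf/common.sh derives it
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # prints 192.168.100.8 here
)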
00:16:54.849 [2024-12-09 11:56:02.087516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.849 [2024-12-09 11:56:02.087517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 [2024-12-09 11:56:02.253871] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cc81f0/0x1ccc6e0) succeed. 00:16:54.849 [2024-12-09 11:56:02.263483] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cc9740/0x1d0dd80) succeed. 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 Malloc0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:54.849 [2024-12-09 11:56:02.413834] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:54.849 { 00:16:54.849 "params": { 00:16:54.849 "name": "Nvme$subsystem", 00:16:54.849 "trtype": "$TEST_TRANSPORT", 00:16:54.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.849 "adrfam": "ipv4", 00:16:54.849 "trsvcid": "$NVMF_PORT", 00:16:54.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.849 "hdgst": ${hdgst:-false}, 00:16:54.849 "ddgst": ${ddgst:-false} 00:16:54.849 }, 00:16:54.849 "method": "bdev_nvme_attach_controller" 00:16:54.849 } 00:16:54.849 EOF 00:16:54.849 )") 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:16:54.849 11:56:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:54.849 "params": { 00:16:54.849 "name": "Nvme0", 00:16:54.849 "trtype": "rdma", 00:16:54.849 "traddr": "192.168.100.8", 00:16:54.849 "adrfam": "ipv4", 00:16:54.849 "trsvcid": "4420", 00:16:54.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:54.849 "hdgst": false, 00:16:54.849 "ddgst": false 00:16:54.849 }, 00:16:54.849 "method": "bdev_nvme_attach_controller" 00:16:54.849 }' 00:16:54.849 [2024-12-09 11:56:02.460447] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
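(test_dma receives its bdev configuration as JSON on --json /dev/fd/62; dma.sh feeds that descriptor from the gen_nvmf_target_json output printed above. A functionally equivalent process-substitution form, assuming host/dma.sh has been sourced so gen_nvmf_target_json is defined:

    ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json <(gen_nvmf_target_json 0) -b Nvme0n1 -f -x translate
)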
00:16:54.849 [2024-12-09 11:56:02.460488] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253876 ] 00:16:54.849 [2024-12-09 11:56:02.538599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.849 [2024-12-09 11:56:02.580117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.849 [2024-12-09 11:56:02.580118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.118 bdev Nvme0n1 reports 1 memory domains 00:17:00.118 bdev Nvme0n1 supports RDMA memory domain 00:17:00.118 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:00.118 ========================================================================== 00:17:00.118 Latency [us] 00:17:00.118 IOPS MiB/s Average min max 00:17:00.118 Core 2: 21133.55 82.55 756.23 260.49 8557.91 00:17:00.118 Core 3: 20975.62 81.94 761.94 262.84 8611.34 00:17:00.118 ========================================================================== 00:17:00.118 Total : 42109.18 164.49 759.07 260.49 8611.34 00:17:00.118 00:17:00.118 Total operations: 210636, translate 210636 pull_push 0 memzero 0 00:17:00.118 11:56:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:17:00.118 11:56:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:17:00.118 11:56:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:17:00.118 [2024-12-09 11:56:07.995856] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
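test_dma is a test-local tool, so read its recurring flags as a hedged gloss borrowed from spdk_nvme_perf conventions rather than documented behavior:

    -q 16        queue depth (outstanding I/Os)
    -o 4096      I/O size in bytes
    -w randrw    random mixed read/write workload
    -M 70        read share of the mix, in percent
    -t 5         run time in seconds
    -m 0xc       core mask: cores 2 and 3, matching the reactor notices
    -b <bdev>    bdev under test (Nvme0n1, Malloc0, lvs0/lvol0)
    -x <mode>    DMA path exercised: translate, pull_push, or memzero

The per-run counters bear the modes out: Nvme0n1 advertises an RDMA memory domain, so the first run above completes entirely as translate operations, while the plain malloc bdev tested next cannot, and falls back to pull_push copies.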
00:17:00.118 [2024-12-09 11:56:07.995910] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254787 ] 00:17:00.118 [2024-12-09 11:56:08.071984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:00.118 [2024-12-09 11:56:08.110750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.118 [2024-12-09 11:56:08.110752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.383 bdev Malloc0 reports 2 memory domains 00:17:05.383 bdev Malloc0 doesn't support RDMA memory domain 00:17:05.383 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:05.383 ========================================================================== 00:17:05.383 Latency [us] 00:17:05.383 IOPS MiB/s Average min max 00:17:05.383 Core 2: 13779.01 53.82 1160.45 469.81 2305.43 00:17:05.383 Core 3: 13705.43 53.54 1166.68 425.32 2029.47 00:17:05.383 ========================================================================== 00:17:05.383 Total : 27484.44 107.36 1163.56 425.32 2305.43 00:17:05.383 00:17:05.383 Total operations: 137474, translate 0 pull_push 549896 memzero 0 00:17:05.383 11:56:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:17:05.383 11:56:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:17:05.383 11:56:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:17:05.383 11:56:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:17:05.383 Ignoring -M option 00:17:05.383 [2024-12-09 11:56:13.430604] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
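A quick consistency check on the pull_push table above: the Total row is the per-core sum, and throughput follows directly from the 4 KiB I/O size:

    13779.01 + 13705.43 = 27484.44 IOPS
    53.82 + 53.54 = 107.36 MiB/s
    MiB/s = IOPS * 4096 / 2^20   (13779.01 / 256 = 53.82)

Note also that the pull_push counter ticks exactly four times per completed I/O in that run (549896 / 137474 = 4).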
00:17:05.383 [2024-12-09 11:56:13.430659] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255698 ] 00:17:05.642 [2024-12-09 11:56:13.507727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:05.642 [2024-12-09 11:56:13.546286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.642 [2024-12-09 11:56:13.546288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.204 bdev 7f169fda-4364-4df4-81c5-1a6f5c11d6d5 reports 1 memory domains 00:17:12.204 bdev 7f169fda-4364-4df4-81c5-1a6f5c11d6d5 supports RDMA memory domain 00:17:12.204 Initialization complete, running randread IO for 5 sec on 2 cores 00:17:12.204 ========================================================================== 00:17:12.204 Latency [us] 00:17:12.204 IOPS MiB/s Average min max 00:17:12.204 Core 2: 74777.37 292.10 213.20 90.07 1725.29 00:17:12.204 Core 3: 72950.78 284.96 218.52 81.94 1748.60 00:17:12.204 ========================================================================== 00:17:12.204 Total : 147728.15 577.06 215.82 81.94 1748.60 00:17:12.204 00:17:12.204 Total operations: 738722, translate 0 pull_push 0 memzero 738722 00:17:12.204 11:56:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:17:12.204 [2024-12-09 11:56:19.092656] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:13.579 Initializing NVMe Controllers 00:17:13.579 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:13.579 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:17:13.579 Initialization complete. Launching workers. 00:17:13.579 ======================================================== 00:17:13.579 Latency(us) 00:17:13.579 Device Information : IOPS MiB/s Average min max 00:17:13.579 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7973.14 4987.10 11969.32 00:17:13.579 ======================================================== 00:17:13.579 Total : 2016.00 7.88 7973.14 4987.10 11969.32 00:17:13.579 00:17:13.579 11:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:17:13.579 11:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:17:13.579 11:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:17:13.579 11:56:21 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:17:13.579 [2024-12-09 11:56:21.430955] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
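The spdk_nvme_perf probe sandwiched between the memzero and translate runs uses the standard perf options: -q 16 queue depth, -o 4096 I/O size, -w write, -t 1 second, and -r a transport ID string selecting the RDMA listener. With no subnqn in the transport ID it queries the discovery service first (hence the deprecation warning about the discovery listener) and then attaches to cnode0. Run by hand:

    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'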
00:17:13.579 [2024-12-09 11:56:21.430997] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257072 ] 00:17:13.579 [2024-12-09 11:56:21.509632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.579 [2024-12-09 11:56:21.550745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.579 [2024-12-09 11:56:21.550748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.143 bdev 856ebf47-85a9-4515-b76a-9519675b14c5 reports 1 memory domains 00:17:20.143 bdev 856ebf47-85a9-4515-b76a-9519675b14c5 supports RDMA memory domain 00:17:20.143 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:20.143 ========================================================================== 00:17:20.143 Latency [us] 00:17:20.143 IOPS MiB/s Average min max 00:17:20.143 Core 2: 18701.60 73.05 854.88 54.95 10085.37 00:17:20.143 Core 3: 18490.84 72.23 864.60 14.65 9743.81 00:17:20.143 ========================================================================== 00:17:20.143 Total : 37192.44 145.28 859.71 14.65 10085.37 00:17:20.143 00:17:20.143 Total operations: 185992, translate 185886 pull_push 0 memzero 106 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.143 11:56:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:20.143 rmmod nvme_rdma 00:17:20.143 rmmod nvme_fabrics 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3253840 ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3253840 ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3253840' 00:17:20.143 killing 
process with pid 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3253840 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:20.143 00:17:20.143 real 0m31.482s 00:17:20.143 user 1m34.949s 00:17:20.143 sys 0m5.439s 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:20.143 ************************************ 00:17:20.143 END TEST dma 00:17:20.143 ************************************ 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.143 ************************************ 00:17:20.143 START TEST nvmf_identify 00:17:20.143 ************************************ 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:20.143 * Looking for test storage... 00:17:20.143 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:20.143 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.144 --rc genhtml_branch_coverage=1 00:17:20.144 --rc genhtml_function_coverage=1 00:17:20.144 --rc genhtml_legend=1 00:17:20.144 --rc geninfo_all_blocks=1 00:17:20.144 --rc geninfo_unexecuted_blocks=1 00:17:20.144 00:17:20.144 ' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.144 --rc genhtml_branch_coverage=1 00:17:20.144 --rc genhtml_function_coverage=1 00:17:20.144 --rc genhtml_legend=1 00:17:20.144 --rc geninfo_all_blocks=1 00:17:20.144 --rc geninfo_unexecuted_blocks=1 00:17:20.144 00:17:20.144 ' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.144 --rc genhtml_branch_coverage=1 00:17:20.144 --rc genhtml_function_coverage=1 00:17:20.144 --rc genhtml_legend=1 00:17:20.144 --rc geninfo_all_blocks=1 00:17:20.144 --rc geninfo_unexecuted_blocks=1 00:17:20.144 00:17:20.144 ' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:20.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.144 --rc genhtml_branch_coverage=1 00:17:20.144 --rc genhtml_function_coverage=1 00:17:20.144 --rc genhtml_legend=1 00:17:20.144 --rc geninfo_all_blocks=1 00:17:20.144 --rc geninfo_unexecuted_blocks=1 00:17:20.144 00:17:20.144 ' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:20.144 11:56:27 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.144 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:20.144 11:56:27 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:20.144 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:17:20.145 11:56:27 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.417 11:56:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:25.417 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:25.417 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:25.417 Found net devices under 0000:da:00.0: mlx_0_0 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:25.417 Found net devices under 0000:da:00.1: mlx_0_1 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:25.417 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:25.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:25.418 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:17:25.418 altname enp218s0f0np0 00:17:25.418 altname ens818f0np0 00:17:25.418 inet 192.168.100.8/24 scope global mlx_0_0 00:17:25.418 valid_lft forever preferred_lft forever 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:25.418 11:56:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:25.418 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:25.418 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:17:25.418 altname enp218s0f1np1 00:17:25.418 altname ens818f1np1 00:17:25.418 inet 192.168.100.9/24 scope global mlx_0_1 00:17:25.418 valid_lft forever preferred_lft forever 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:25.418 11:56:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:25.418 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:25.677 192.168.100.9' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:25.677 192.168.100.9' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:25.677 192.168.100.9' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3261152 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3261152 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3261152 ']' 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.677 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.677 [2024-12-09 11:56:33.584417] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:17:25.677 [2024-12-09 11:56:33.584473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.677 [2024-12-09 11:56:33.665781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.677 [2024-12-09 11:56:33.707447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.678 [2024-12-09 11:56:33.707486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.678 [2024-12-09 11:56:33.707493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.678 [2024-12-09 11:56:33.707499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.678 [2024-12-09 11:56:33.707504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.678 [2024-12-09 11:56:33.708962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.678 [2024-12-09 11:56:33.709091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.678 [2024-12-09 11:56:33.709176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.678 [2024-12-09 11:56:33.709177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.936 [2024-12-09 11:56:33.842624] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1abc940/0x1ac0e30) succeed. 00:17:25.936 [2024-12-09 11:56:33.854228] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1abdfd0/0x1b024d0) succeed. 
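The four reactor notices follow directly from the launch line recorded at host/identify.sh@18; decoded, with the usual SPDK application options (the process_shm remark is an inference from the trap installed just above):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # -i 0       shared-memory ID (NVMF_APP_SHM_ID), reused by process_shm in the error trap
    # -e 0xFFFF  tracepoint group mask, echoed back by the app_setup_trace notices above
    # -m 0xF     core mask for cores 0-3, hence "Total cores available: 4" and four reactors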
00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.936 11:56:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 Malloc0 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 [2024-12-09 11:56:34.069966] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.200 [ 00:17:26.200 { 00:17:26.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:26.200 "subtype": "Discovery", 00:17:26.200 "listen_addresses": [ 00:17:26.200 { 00:17:26.200 "trtype": "RDMA", 
00:17:26.200 "adrfam": "IPv4", 00:17:26.200 "traddr": "192.168.100.8", 00:17:26.200 "trsvcid": "4420" 00:17:26.200 } 00:17:26.200 ], 00:17:26.200 "allow_any_host": true, 00:17:26.200 "hosts": [] 00:17:26.200 }, 00:17:26.200 { 00:17:26.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.200 "subtype": "NVMe", 00:17:26.200 "listen_addresses": [ 00:17:26.200 { 00:17:26.200 "trtype": "RDMA", 00:17:26.200 "adrfam": "IPv4", 00:17:26.200 "traddr": "192.168.100.8", 00:17:26.200 "trsvcid": "4420" 00:17:26.200 } 00:17:26.200 ], 00:17:26.200 "allow_any_host": true, 00:17:26.200 "hosts": [], 00:17:26.200 "serial_number": "SPDK00000000000001", 00:17:26.200 "model_number": "SPDK bdev Controller", 00:17:26.200 "max_namespaces": 32, 00:17:26.200 "min_cntlid": 1, 00:17:26.200 "max_cntlid": 65519, 00:17:26.200 "namespaces": [ 00:17:26.200 { 00:17:26.200 "nsid": 1, 00:17:26.200 "bdev_name": "Malloc0", 00:17:26.200 "name": "Malloc0", 00:17:26.200 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:26.200 "eui64": "ABCDEF0123456789", 00:17:26.200 "uuid": "b7212b5f-a0dd-4cbe-8822-8b1b7c586f88" 00:17:26.200 } 00:17:26.200 ] 00:17:26.200 } 00:17:26.200 ] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.200 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:26.200 [2024-12-09 11:56:34.121363] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:17:26.200 [2024-12-09 11:56:34.121404] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261300 ] 00:17:26.200 [2024-12-09 11:56:34.184024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:26.200 [2024-12-09 11:56:34.184093] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:26.200 [2024-12-09 11:56:34.184106] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:26.200 [2024-12-09 11:56:34.184109] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:26.200 [2024-12-09 11:56:34.184137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:26.200 [2024-12-09 11:56:34.198333] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:26.200 [2024-12-09 11:56:34.213928] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:26.200 [2024-12-09 11:56:34.213938] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:26.200 [2024-12-09 11:56:34.213947] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213953] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213958] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213962] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213967] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213971] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213976] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213980] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213985] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213989] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213994] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.213998] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214003] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214007] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214012] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214016] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214020] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214025] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214029] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214034] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214038] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214043] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214047] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 
11:56:34.214052] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214056] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214061] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214065] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214070] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214074] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214079] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214083] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214087] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:26.200 [2024-12-09 11:56:34.214093] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:26.200 [2024-12-09 11:56:34.214096] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:26.200 [2024-12-09 11:56:34.214119] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.200 [2024-12-09 11:56:34.214131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x182c00 00:17:26.200 [2024-12-09 11:56:34.218813] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.218821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.218828] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218834] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:26.201 [2024-12-09 11:56:34.218840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:26.201 [2024-12-09 11:56:34.218845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:26.201 [2024-12-09 11:56:34.218858] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.218893] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.218898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.218903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:26.201 [2024-12-09 11:56:34.218908] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:26.201 [2024-12-09 11:56:34.218919] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.218947] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.218951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.218956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:26.201 [2024-12-09 11:56:34.218960] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.218972] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.218978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.218999] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.219015] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219022] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.219051] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219060] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:26.201 [2024-12-09 11:56:34.219065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.219069] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 
11:56:34.219074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.219179] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:26.201 [2024-12-09 11:56:34.219183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.219191] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.219216] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.201 [2024-12-09 11:56:34.219230] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219237] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.219262] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219270] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.201 [2024-12-09 11:56:34.219275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219279] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:26.201 [2024-12-09 11:56:34.219295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219305] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182c00 00:17:26.201 [2024-12-09 11:56:34.219348] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
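The enable handshake just logged is the generic NVMe bring-up sequence: over fabrics, CC and CSTS are reached via Property Set/Get commands rather than MMIO, but the logic is the same, i.e. disable and wait for CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1. A self-contained toy model of that loop; prop_get/prop_set and the immediately-ready fake target are stand-ins for the fabrics property commands in the trace, not SPDK internals:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_CC_EN    (1u << 0)
    #define NVME_CSTS_RDY (1u << 0)

    enum { REG_CC, REG_CSTS };

    /* Fake target-side property storage standing in for the remote controller. */
    static uint32_t g_cc, g_csts;

    static void prop_set(int reg, uint32_t val)
    {
        if (reg == REG_CC) {
            g_cc = val;
            /* A real target raises/clears RDY some time after EN changes;
             * modeled here as immediate. */
            g_csts = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
        }
    }

    static uint32_t prop_get(int reg)
    {
        return reg == REG_CC ? g_cc : g_csts;
    }

    int main(void)
    {
        /* Disable first and wait for RDY = 0, as the state machine does above. */
        prop_set(REG_CC, prop_get(REG_CC) & ~NVME_CC_EN);
        while (prop_get(REG_CSTS) & NVME_CSTS_RDY) { }

        /* Enable, then poll until RDY = 1 ("CC.EN = 1 && CSTS.RDY = 1 -
         * controller is ready" in the trace). */
        prop_set(REG_CC, prop_get(REG_CC) | NVME_CC_EN);
        while (!(prop_get(REG_CSTS) & NVME_CSTS_RDY)) { }

        printf("controller ready\n");
        return 0;
    }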
00:17:26.201 [2024-12-09 11:56:34.219353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219360] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:26.201 [2024-12-09 11:56:34.219364] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:26.201 [2024-12-09 11:56:34.219368] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:26.201 [2024-12-09 11:56:34.219373] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:26.201 [2024-12-09 11:56:34.219379] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:26.201 [2024-12-09 11:56:34.219384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219388] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219400] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.219434] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219447] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.201 [2024-12-09 11:56:34.219459] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.201 [2024-12-09 11:56:34.219470] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.201 [2024-12-09 11:56:34.219481] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.201 [2024-12-09 11:56:34.219490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219495] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.201 [2024-12-09 11:56:34.219509] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.201 [2024-12-09 11:56:34.219534] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.201 [2024-12-09 11:56:34.219539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:26.201 [2024-12-09 11:56:34.219546] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:26.201 [2024-12-09 11:56:34.219551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:26.201 [2024-12-09 11:56:34.219555] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219562] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.201 [2024-12-09 11:56:34.219569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182c00 00:17:26.202 [2024-12-09 11:56:34.219595] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.202 [2024-12-09 11:56:34.219599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:26.202 [2024-12-09 11:56:34.219605] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219612] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:26.202 [2024-12-09 11:56:34.219635] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x400 key:0x182c00 00:17:26.202 [2024-12-09 11:56:34.219649] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.202 [2024-12-09 11:56:34.219678] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.202 [2024-12-09 11:56:34.219683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
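The GET LOG PAGE (02) commands above read log page 0x70, the discovery log: first 0x400 bytes to learn the record count, then the full 0xc00 bytes (a 1024-byte header plus two 1024-byte entries, matching the report printed further below). A sketch of the same read via the public SPDK admin API, assuming ctrlr came from a connect like the earlier sketch; the single full-size read and the helper name dump_discovery_log are simplifications, not the identify tool's actual code:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;
    }

    /* ctrlr: an admin-capable handle, e.g. from spdk_nvme_connect(). */
    static void dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvmf_discovery_log_page *log;
        bool done = false;
        uint64_t i;

        /* DMA-able buffer so the RDMA transport can register it; 0xc00 bytes
         * is the size of the second GET LOG PAGE in the trace. */
        log = spdk_dma_zmalloc(0xc00, 4096, NULL);
        if (log == NULL) {
            return;
        }

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                             log, 0xc00, 0, log_done, &done) != 0) {
            spdk_dma_free(log);
            return;
        }
        while (!done) {
            /* Reaps the completions that show up as "CQ recv completion"
             * DEBUG records in the trace. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);
        for (i = 0; i < log->numrec; i++) {
            struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
            printf("entry %" PRIu64 ": trtype=%u subtype=%u trsvcid=%.32s subnqn=%.223s\n",
                   i, e->trtype, e->subtype, e->trsvcid, e->subnqn);
        }
        spdk_dma_free(log);
    }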
00:17:26.202 [2024-12-09 11:56:34.219693] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x182c00 00:17:26.202 [2024-12-09 11:56:34.219703] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219708] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.202 [2024-12-09 11:56:34.219712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:26.202 [2024-12-09 11:56:34.219717] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219736] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.202 [2024-12-09 11:56:34.219741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:26.202 [2024-12-09 11:56:34.219749] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x182c00 00:17:26.202 [2024-12-09 11:56:34.219761] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.202 [2024-12-09 11:56:34.219783] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.202 [2024-12-09 11:56:34.219788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:26.202 [2024-12-09 11:56:34.219795] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.202 ===================================================== 00:17:26.202 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:26.202 ===================================================== 00:17:26.202 Controller Capabilities/Features 00:17:26.202 ================================ 00:17:26.202 Vendor ID: 0000 00:17:26.202 Subsystem Vendor ID: 0000 00:17:26.202 Serial Number: .................... 00:17:26.202 Model Number: ........................................ 
00:17:26.202 Firmware Version: 25.01 00:17:26.202 Recommended Arb Burst: 0 00:17:26.202 IEEE OUI Identifier: 00 00 00 00:17:26.202 Multi-path I/O 00:17:26.202 May have multiple subsystem ports: No 00:17:26.202 May have multiple controllers: No 00:17:26.202 Associated with SR-IOV VF: No 00:17:26.202 Max Data Transfer Size: 131072 00:17:26.202 Max Number of Namespaces: 0 00:17:26.202 Max Number of I/O Queues: 1024 00:17:26.202 NVMe Specification Version (VS): 1.3 00:17:26.202 NVMe Specification Version (Identify): 1.3 00:17:26.202 Maximum Queue Entries: 128 00:17:26.202 Contiguous Queues Required: Yes 00:17:26.202 Arbitration Mechanisms Supported 00:17:26.202 Weighted Round Robin: Not Supported 00:17:26.202 Vendor Specific: Not Supported 00:17:26.202 Reset Timeout: 15000 ms 00:17:26.202 Doorbell Stride: 4 bytes 00:17:26.202 NVM Subsystem Reset: Not Supported 00:17:26.202 Command Sets Supported 00:17:26.202 NVM Command Set: Supported 00:17:26.202 Boot Partition: Not Supported 00:17:26.202 Memory Page Size Minimum: 4096 bytes 00:17:26.202 Memory Page Size Maximum: 4096 bytes 00:17:26.202 Persistent Memory Region: Not Supported 00:17:26.202 Optional Asynchronous Events Supported 00:17:26.202 Namespace Attribute Notices: Not Supported 00:17:26.202 Firmware Activation Notices: Not Supported 00:17:26.202 ANA Change Notices: Not Supported 00:17:26.202 PLE Aggregate Log Change Notices: Not Supported 00:17:26.202 LBA Status Info Alert Notices: Not Supported 00:17:26.202 EGE Aggregate Log Change Notices: Not Supported 00:17:26.202 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.202 Zone Descriptor Change Notices: Not Supported 00:17:26.202 Discovery Log Change Notices: Supported 00:17:26.202 Controller Attributes 00:17:26.202 128-bit Host Identifier: Not Supported 00:17:26.202 Non-Operational Permissive Mode: Not Supported 00:17:26.202 NVM Sets: Not Supported 00:17:26.202 Read Recovery Levels: Not Supported 00:17:26.202 Endurance Groups: Not Supported 00:17:26.202 Predictable Latency Mode: Not Supported 00:17:26.202 Traffic Based Keep Alive: Not Supported 00:17:26.202 Namespace Granularity: Not Supported 00:17:26.202 SQ Associations: Not Supported 00:17:26.202 UUID List: Not Supported 00:17:26.202 Multi-Domain Subsystem: Not Supported 00:17:26.202 Fixed Capacity Management: Not Supported 00:17:26.202 Variable Capacity Management: Not Supported 00:17:26.202 Delete Endurance Group: Not Supported 00:17:26.202 Delete NVM Set: Not Supported 00:17:26.202 Extended LBA Formats Supported: Not Supported 00:17:26.202 Flexible Data Placement Supported: Not Supported 00:17:26.202 00:17:26.202 Controller Memory Buffer Support 00:17:26.202 ================================ 00:17:26.202 Supported: No 00:17:26.202 00:17:26.202 Persistent Memory Region Support 00:17:26.202 ================================ 00:17:26.202 Supported: No 00:17:26.202 00:17:26.202 Admin Command Set Attributes 00:17:26.202 ============================ 00:17:26.202 Security Send/Receive: Not Supported 00:17:26.202 Format NVM: Not Supported 00:17:26.202 Firmware Activate/Download: Not Supported 00:17:26.202 Namespace Management: Not Supported 00:17:26.202 Device Self-Test: Not Supported 00:17:26.202 Directives: Not Supported 00:17:26.202 NVMe-MI: Not Supported 00:17:26.202 Virtualization Management: Not Supported 00:17:26.202 Doorbell Buffer Config: Not Supported 00:17:26.202 Get LBA Status Capability: Not Supported 00:17:26.202 Command & Feature Lockdown Capability: Not Supported 00:17:26.202 Abort Command Limit: 1 00:17:26.202 Async
Event Request Limit: 4 00:17:26.202 Number of Firmware Slots: N/A 00:17:26.202 Firmware Slot 1 Read-Only: N/A 00:17:26.202 Firmware Activation Without Reset: N/A 00:17:26.202 Multiple Update Detection Support: N/A 00:17:26.202 Firmware Update Granularity: No Information Provided 00:17:26.202 Per-Namespace SMART Log: No 00:17:26.202 Asymmetric Namespace Access Log Page: Not Supported 00:17:26.202 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:26.202 Command Effects Log Page: Not Supported 00:17:26.202 Get Log Page Extended Data: Supported 00:17:26.202 Telemetry Log Pages: Not Supported 00:17:26.202 Persistent Event Log Pages: Not Supported 00:17:26.202 Supported Log Pages Log Page: May Support 00:17:26.202 Commands Supported & Effects Log Page: Not Supported 00:17:26.202 Feature Identifiers & Effects Log Page: May Support 00:17:26.202 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.202 Data Area 4 for Telemetry Log: Not Supported 00:17:26.202 Error Log Page Entries Supported: 128 00:17:26.202 Keep Alive: Not Supported 00:17:26.202 00:17:26.202 NVM Command Set Attributes 00:17:26.202 ========================== 00:17:26.202 Submission Queue Entry Size 00:17:26.202 Max: 1 00:17:26.202 Min: 1 00:17:26.202 Completion Queue Entry Size 00:17:26.202 Max: 1 00:17:26.202 Min: 1 00:17:26.202 Number of Namespaces: 0 00:17:26.202 Compare Command: Not Supported 00:17:26.202 Write Uncorrectable Command: Not Supported 00:17:26.202 Dataset Management Command: Not Supported 00:17:26.202 Write Zeroes Command: Not Supported 00:17:26.203 Set Features Save Field: Not Supported 00:17:26.203 Reservations: Not Supported 00:17:26.203 Timestamp: Not Supported 00:17:26.203 Copy: Not Supported 00:17:26.203 Volatile Write Cache: Not Present 00:17:26.203 Atomic Write Unit (Normal): 1 00:17:26.203 Atomic Write Unit (PFail): 1 00:17:26.203 Atomic Compare & Write Unit: 1 00:17:26.203 Fused Compare & Write: Supported 00:17:26.203 Scatter-Gather List 00:17:26.203 SGL Command Set: Supported 00:17:26.203 SGL Keyed: Supported 00:17:26.203 SGL Bit Bucket Descriptor: Not Supported 00:17:26.203 SGL Metadata Pointer: Not Supported 00:17:26.203 Oversized SGL: Not Supported 00:17:26.203 SGL Metadata Address: Not Supported 00:17:26.203 SGL Offset: Supported 00:17:26.203 Transport SGL Data Block: Not Supported 00:17:26.203 Replay Protected Memory Block: Not Supported 00:17:26.203 00:17:26.203 Firmware Slot Information 00:17:26.203 ========================= 00:17:26.203 Active slot: 0 00:17:26.203 00:17:26.203 00:17:26.203 Error Log 00:17:26.203 ========= 00:17:26.203 00:17:26.203 Active Namespaces 00:17:26.203 ================= 00:17:26.203 Discovery Log Page 00:17:26.203 ================== 00:17:26.203 Generation Counter: 2 00:17:26.203 Number of Records: 2 00:17:26.203 Record Format: 0 00:17:26.203 00:17:26.203 Discovery Log Entry 0 00:17:26.203 ---------------------- 00:17:26.203 Transport Type: 1 (RDMA) 00:17:26.203 Address Family: 1 (IPv4) 00:17:26.203 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:26.203 Entry Flags: 00:17:26.203 Duplicate Returned Information: 1 00:17:26.203 Explicit Persistent Connection Support for Discovery: 1 00:17:26.203 Transport Requirements: 00:17:26.203 Secure Channel: Not Required 00:17:26.203 Port ID: 0 (0x0000) 00:17:26.203 Controller ID: 65535 (0xffff) 00:17:26.203 Admin Max SQ Size: 128 00:17:26.203 Transport Service Identifier: 4420 00:17:26.203 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:26.203 Transport Address: 192.168.100.8 00:17:26.203
Transport Specific Address Subtype - RDMA 00:17:26.203 RDMA QP Service Type: 1 (Reliable Connected) 00:17:26.203 RDMA Provider Type: 1 (No provider specified) 00:17:26.203 RDMA CM Service: 1 (RDMA_CM) 00:17:26.203 Discovery Log Entry 1 00:17:26.203 ---------------------- 00:17:26.203 Transport Type: 1 (RDMA) 00:17:26.203 Address Family: 1 (IPv4) 00:17:26.203 Subsystem Type: 2 (NVM Subsystem) 00:17:26.203 Entry Flags: 00:17:26.203 Duplicate Returned Information: 0 00:17:26.203 Explicit Persistent Connection Support for Discovery: 0 00:17:26.203 Transport Requirements: 00:17:26.203 Secure Channel: Not Required 00:17:26.203 Port ID: 0 (0x0000) 00:17:26.203 Controller ID: 65535 (0xffff) 00:17:26.203 Admin Max SQ Size: [2024-12-09 11:56:34.219867] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:26.203 [2024-12-09 11:56:34.219876] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10490 doesn't match qid 00:17:26.203 [2024-12-09 11:56:34.219888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:9c7369d0 sqhd:0880 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.219894] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10490 doesn't match qid 00:17:26.203 [2024-12-09 11:56:34.219901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:9c7369d0 sqhd:0880 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.219905] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10490 doesn't match qid 00:17:26.203 [2024-12-09 11:56:34.219911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:9c7369d0 sqhd:0880 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.219916] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10490 doesn't match qid 00:17:26.203 [2024-12-09 11:56:34.219922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:9c7369d0 sqhd:0880 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.219930] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.219937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.219952] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.219957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.219966] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.219972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.219977] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.219996] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220006] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:26.203 [2024-12-09 11:56:34.220010] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:26.203 [2024-12-09 11:56:34.220014] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220021] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220045] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220054] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220062] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220088] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220098] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220106] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220129] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220139] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220147] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220174] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220183] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220191] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220216] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220227] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220235] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.203 [2024-12-09 11:56:34.220267] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.203 [2024-12-09 11:56:34.220273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:26.203 [2024-12-09 11:56:34.220279] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182c00 00:17:26.203 [2024-12-09 11:56:34.220287] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220314] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220325] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220332] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220356] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220366] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220373] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220397] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220406] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220413] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220440] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220449] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220457] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220484] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220494] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220501] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220526] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220536] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220543] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220567] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220576] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220583] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220611] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220621] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220627] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220654] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220663] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220670] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220699] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220708] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220715] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220740] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220749] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220756] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220778] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220787] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220794] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220827] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220836] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220844] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220867] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220876] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220883] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220908] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.204 [2024-12-09 11:56:34.220913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:26.204 [2024-12-09 11:56:34.220917] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220924] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.204 [2024-12-09 11:56:34.220931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.204 [2024-12-09 11:56:34.220948] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.220952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.220957] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.220964] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.220970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.220992] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.220997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221002] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221009] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221040] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221049] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221059] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221084] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221093] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221100] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221125] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221134] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221141] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221165] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221174] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221181] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221208] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221217] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221224] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221249] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221259] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221266] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221294] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221303] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221312] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221335] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221344] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221352] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.205 [2024-12-09 11:56:34.221375] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.205 [2024-12-09 11:56:34.221379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:26.205 [2024-12-09 11:56:34.221384] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.205 [2024-12-09 11:56:34.221391] 
[2024-12-09 11:56:34.221397 through 11:56:34.226850: a long run of repeated *DEBUG*/*NOTICE* records elided for readability. While the discovery controller shuts down, the host loops on the same cycle: _nvme_rdma_qpair_submit_request posts a FABRIC PROPERTY GET (qid:0 cid:3), nvme_rdma_process_recv_completion reports a CQ recv completion, spdk_nvme_print_completion prints SUCCESS (00/00) cdw0:1 with sqhd stepping 0x0012..0x001f and wrapping through 0x0000..0x0013, and nvme_rdma_request_ready recycles the 0x10-byte response buffer. The final poll returns cdw0:9, consistent with CSTS.RDY=1 and CSTS.SHST=10b (shutdown complete), matching the next record.]
00:17:26.207 [2024-12-09 11:56:34.226856] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:17:26.470 128 00:17:26.470 Transport Service Identifier: 4420 00:17:26.470 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:26.470 Transport Address: 192.168.100.8 00:17:26.470 Transport Specific Address Subtype - RDMA 00:17:26.470 RDMA QP Service Type: 1 (Reliable Connected) 00:17:26.470 RDMA Provider Type: 1 (No provider specified) 00:17:26.470 RDMA CM Service: 1 (RDMA_CM) 00:17:26.470 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:26.470 [2024-12-09 11:56:34.298001] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
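The spdk_nvme_identify invocation above drives everything that follows. For orientation, a minimal sketch of the same flow using SPDK's public host API: connect to the target named in -r, print a few identify-controller fields, detach. This is an illustration of the API, not the identify tool's actual source; error handling is trimmed and the program name is made up.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";  /* hypothetical name, not the test's */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* The same -r string the test passes to spdk_nvme_identify. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Connecting drives the whole admin-queue sequence traced below:
     * FABRIC CONNECT, VS/CAP reads, the CC.EN/CSTS.RDY handshake,
     * IDENTIFY, AER setup, keep-alive and queue-count negotiation. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, MDTS %u, namespaces %u\n",
           cdata->cntlid, cdata->mdts, cdata->nn);

    spdk_nvme_detach(ctrlr);
    return 0;
}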
00:17:26.470 [2024-12-09 11:56:34.298034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261305 ] 00:17:26.470 [2024-12-09 11:56:34.357939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:26.470 [2024-12-09 11:56:34.358007] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:26.470 [2024-12-09 11:56:34.358017] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:26.470 [2024-12-09 11:56:34.358020] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:26.470 [2024-12-09 11:56:34.358044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:26.470 [2024-12-09 11:56:34.368211] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:17:26.470 [2024-12-09 11:56:34.379422] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:26.470 [2024-12-09 11:56:34.379438] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
[2024-12-09 11:56:34.379444 through 11:56:34.379580: 31 repeated nvme_rdma.c: 909:nvme_rdma_create_rsps *DEBUG* records elided, one 0x10-byte response buffer per admin-queue slot, local addr 0x2000003cd540 through 0x2000003cd9f0 in 0x28-byte strides, all lkey 0x182c00]
00:17:26.471 [2024-12-09 11:56:34.379584] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:26.471 [2024-12-09 11:56:34.379588] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:26.471 [2024-12-09 11:56:34.379591] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:26.471 [2024-12-09 11:56:34.379608] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.379620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x182c00 00:17:26.471 [2024-12-09 11:56:34.383812] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.383820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.383825] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383831] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:26.471
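Each of the 31 response buffers elided above is length 0x10 because an NVMe completion queue entry is exactly 16 bytes; the RDMA transport posts one RECV of that size per admin-queue slot (the CM event above negotiated queue depth 32). A one-line sanity check against the SPDK spec headers, as a standalone sketch rather than part of the test:

#include <assert.h>
#include "spdk/nvme_spec.h"

/* The "length 0x10" in every create_rsps record is one NVMe completion. */
static_assert(sizeof(struct spdk_nvme_cpl) == 16, "NVMe CQE is 16 bytes");

int main(void) { return 0; }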
[2024-12-09 11:56:34.383837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:26.471 [2024-12-09 11:56:34.383841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:26.471 [2024-12-09 11:56:34.383852] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.383877] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.383882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.383886] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:26.471 [2024-12-09 11:56:34.383891] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:26.471 [2024-12-09 11:56:34.383902] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.383928] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.383932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.383937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:26.471 [2024-12-09 11:56:34.383941] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.383952] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.383977] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.383981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.383986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.383990] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.383997] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.384020] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.384029] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:26.471 [2024-12-09 11:56:34.384033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.384037] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.384146] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:26.471 [2024-12-09 11:56:34.384151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.384157] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.384179] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.384184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.384188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.471 [2024-12-09 11:56:34.384192] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384199] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.384220] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.384224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.384228] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.471 [2024-12-09 11:56:34.384233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 
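The transitions just logged, from "check en" through "CC.EN = 1 && CSTS.RDY = 1 - controller is ready", are the standard NVMe enable handshake, carried here over FABRIC PROPERTY GET/SET commands instead of PCIe register accesses. A schematic of that loop; prop_get()/prop_set() are hypothetical stand-ins for those fabrics commands (not SPDK functions), and the offsets follow the NVMe register map:

#include <stdint.h>

#define NVME_REG_CC   0x14  /* controller configuration */
#define NVME_REG_CSTS 0x1c  /* controller status        */

extern uint32_t prop_get(uint32_t offset);               /* hypothetical */
extern void     prop_set(uint32_t offset, uint32_t val); /* hypothetical */

void enable_controller(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    if (cc & 0x1) {                       /* CC.EN already set: disable first */
        prop_set(NVME_REG_CC, cc & ~0x1u);
    }
    while (prop_get(NVME_REG_CSTS) & 0x1) {
        ;                                 /* "disable and wait for CSTS.RDY = 0" */
    }
    prop_set(NVME_REG_CC, cc | 0x1);      /* "Setting CC.EN = 1" */
    while (!(prop_get(NVME_REG_CSTS) & 0x1)) {
        ;                                 /* "wait for CSTS.RDY = 1" -> ready */
    }
}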
00:17:26.471 [2024-12-09 11:56:34.384237] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:26.471 [2024-12-09 11:56:34.384248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.471 [2024-12-09 11:56:34.384256] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182c00 00:17:26.471 [2024-12-09 11:56:34.384304] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.384310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.384316] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:26.471 [2024-12-09 11:56:34.384321] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:26.471 [2024-12-09 11:56:34.384325] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:26.471 [2024-12-09 11:56:34.384329] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:26.471 [2024-12-09 11:56:34.384334] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:26.471 [2024-12-09 11:56:34.384339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:26.471 [2024-12-09 11:56:34.384343] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.471 [2024-12-09 11:56:34.384354] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.471 [2024-12-09 11:56:34.384384] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.471 [2024-12-09 11:56:34.384388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:26.471 [2024-12-09 11:56:34.384396] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x182c00 00:17:26.471 [2024-12-09 11:56:34.384402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.471 [2024-12-09 11:56:34.384407] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 
0x182c00 00:17:26.472 [2024-12-09 11:56:34.384412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.472 [2024-12-09 11:56:34.384417] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.472 [2024-12-09 11:56:34.384428] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.472 [2024-12-09 11:56:34.384437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384441] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384453] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.384480] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384492] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:26.472 [2024-12-09 11:56:34.384497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384501] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384517] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.384548] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 
11:56:34.384601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384606] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384619] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ca000 len:0x1000 key:0x182c00 00:17:26.472 [2024-12-09 11:56:34.384650] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384664] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:26.472 [2024-12-09 11:56:34.384673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384678] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384691] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182c00 00:17:26.472 [2024-12-09 11:56:34.384728] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384749] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384762] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182c00 00:17:26.472 [2024-12-09 11:56:34.384797] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384821] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384852] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.472 [2024-12-09 11:56:34.384856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:26.472 [2024-12-09 11:56:34.384861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:26.472 [2024-12-09 11:56:34.384871] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.384883] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.472 [2024-12-09 11:56:34.384897] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384906] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384913] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.384924] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384933] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384947] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384956] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384963] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.384968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.384987] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.384991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.384995] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385002] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.472 [2024-12-09 11:56:34.385024] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.472 [2024-12-09 11:56:34.385028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:17:26.472 [2024-12-09 11:56:34.385033] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385043] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x2000 key:0x182c00 00:17:26.472 [2024-12-09 11:56:34.385056] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x200 key:0x182c00 00:17:26.472 [2024-12-09 11:56:34.385068] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x182c00 00:17:26.472 [2024-12-09 11:56:34.385074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x182c00 00:17:26.473 [2024-12-09 11:56:34.385083] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x182c00 00:17:26.473 
[2024-12-09 11:56:34.385088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c5000 len:0x1000 key:0x182c00 00:17:26.473 [2024-12-09 11:56:34.385095] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.473 [2024-12-09 11:56:34.385099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:26.473 [2024-12-09 11:56:34.385108] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182c00 00:17:26.473 [2024-12-09 11:56:34.385117] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.473 [2024-12-09 11:56:34.385122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:26.473 [2024-12-09 11:56:34.385130] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182c00 00:17:26.473 [2024-12-09 11:56:34.385136] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.473 [2024-12-09 11:56:34.385140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:26.473 [2024-12-09 11:56:34.385145] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182c00 00:17:26.473 [2024-12-09 11:56:34.385156] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.473 [2024-12-09 11:56:34.385160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:26.473 [2024-12-09 11:56:34.385167] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182c00 00:17:26.473 ===================================================== 00:17:26.473 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:26.473 ===================================================== 00:17:26.473 Controller Capabilities/Features 00:17:26.473 ================================ 00:17:26.473 Vendor ID: 8086 00:17:26.473 Subsystem Vendor ID: 8086 00:17:26.473 Serial Number: SPDK00000000000001 00:17:26.473 Model Number: SPDK bdev Controller 00:17:26.473 Firmware Version: 25.01 00:17:26.473 Recommended Arb Burst: 6 00:17:26.473 IEEE OUI Identifier: e4 d2 5c 00:17:26.473 Multi-path I/O 00:17:26.473 May have multiple subsystem ports: Yes 00:17:26.473 May have multiple controllers: Yes 00:17:26.473 Associated with SR-IOV VF: No 00:17:26.473 Max Data Transfer Size: 131072 00:17:26.473 Max Number of Namespaces: 32 00:17:26.473 Max Number of I/O Queues: 127 00:17:26.473 NVMe Specification Version (VS): 1.3 00:17:26.473 NVMe Specification Version (Identify): 1.3 00:17:26.473 Maximum Queue Entries: 128 00:17:26.473 Contiguous Queues Required: Yes 00:17:26.473 Arbitration Mechanisms Supported 00:17:26.473 Weighted Round Robin: Not Supported 00:17:26.473 Vendor Specific: Not Supported 00:17:26.473 Reset Timeout: 15000 ms 00:17:26.473 Doorbell Stride: 4 bytes 00:17:26.473 NVM Subsystem Reset: Not Supported 00:17:26.473 Command Sets Supported 00:17:26.473 NVM Command Set: Supported 00:17:26.473 Boot Partition: Not Supported 00:17:26.473 Memory Page Size Minimum: 4096 bytes 00:17:26.473 Memory Page Size Maximum: 4096 bytes 00:17:26.473 Persistent Memory Region: Not Supported 
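Several of the fields above trace directly to raw completion values earlier in this log: the "read vs" completion returned cdw0:10300, the "read cap" low dword was 1e01007f, SET FEATURES NUMBER OF QUEUES completed with cdw0:7e007e, and GET FEATURES KEEP ALIVE TIMER with cdw0:2710. A worked decode as a standalone sketch, with bit layouts per the NVMe spec:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vs   = 0x10300;    /* "read vs" completion cdw0              */
    uint32_t cap  = 0x1e01007f; /* low dword of the "read cap" property   */
    uint32_t nq   = 0x7e007e;   /* SET FEATURES NUMBER OF QUEUES cdw0     */
    uint32_t kato = 0x2710;     /* GET FEATURES KEEP ALIVE TIMER cdw0     */

    /* VS bits 31:16 = major, 15:8 = minor -> "1.3" */
    printf("VS: %u.%u\n", vs >> 16, (vs >> 8) & 0xff);
    /* CAP.MQES (15:0) is 0-based: 0x7f -> 128 "Maximum Queue Entries" */
    printf("Maximum Queue Entries: %u\n", (cap & 0xffff) + 1);
    /* CAP.TO (31:24) in 500 ms units: 0x1e * 500 -> 15000 ms "Reset Timeout" */
    printf("Reset Timeout: %u ms\n", ((cap >> 24) & 0xff) * 500);
    /* NSQA (15:0) and NCQA (31:16) are 0-based: 126+1 -> 127 I/O queues */
    printf("Max Number of I/O Queues: %u\n", (nq & 0xffff) + 1);
    /* KATO 0x2710 = 10000 ms; the host sends keep-alives at half that,
     * matching "Sending keep alive every 5000000 us" logged above. */
    printf("KATO: %u ms (keep-alive every %u ms)\n", kato, kato / 2);
    return 0;
}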
00:17:26.473 Optional Asynchronous Events Supported
00:17:26.473 Namespace Attribute Notices: Supported
00:17:26.473 Firmware Activation Notices: Not Supported
00:17:26.473 ANA Change Notices: Not Supported
00:17:26.473 PLE Aggregate Log Change Notices: Not Supported
00:17:26.473 LBA Status Info Alert Notices: Not Supported
00:17:26.473 EGE Aggregate Log Change Notices: Not Supported
00:17:26.473 Normal NVM Subsystem Shutdown event: Not Supported
00:17:26.473 Zone Descriptor Change Notices: Not Supported
00:17:26.473 Discovery Log Change Notices: Not Supported
00:17:26.473 Controller Attributes
00:17:26.473 128-bit Host Identifier: Supported
00:17:26.473 Non-Operational Permissive Mode: Not Supported
00:17:26.473 NVM Sets: Not Supported
00:17:26.473 Read Recovery Levels: Not Supported
00:17:26.473 Endurance Groups: Not Supported
00:17:26.473 Predictable Latency Mode: Not Supported
00:17:26.473 Traffic Based Keep Alive: Not Supported
00:17:26.473 Namespace Granularity: Not Supported
00:17:26.473 SQ Associations: Not Supported
00:17:26.473 UUID List: Not Supported
00:17:26.473 Multi-Domain Subsystem: Not Supported
00:17:26.473 Fixed Capacity Management: Not Supported
00:17:26.473 Variable Capacity Management: Not Supported
00:17:26.473 Delete Endurance Group: Not Supported
00:17:26.473 Delete NVM Set: Not Supported
00:17:26.473 Extended LBA Formats Supported: Not Supported
00:17:26.473 Flexible Data Placement Supported: Not Supported
00:17:26.473
00:17:26.473 Controller Memory Buffer Support
00:17:26.473 ================================
00:17:26.473 Supported: No
00:17:26.473
00:17:26.473 Persistent Memory Region Support
00:17:26.473 ================================
00:17:26.473 Supported: No
00:17:26.473
00:17:26.473 Admin Command Set Attributes
00:17:26.473 ============================
00:17:26.473 Security Send/Receive: Not Supported
00:17:26.473 Format NVM: Not Supported
00:17:26.473 Firmware Activate/Download: Not Supported
00:17:26.473 Namespace Management: Not Supported
00:17:26.473 Device Self-Test: Not Supported
00:17:26.473 Directives: Not Supported
00:17:26.473 NVMe-MI: Not Supported
00:17:26.473 Virtualization Management: Not Supported
00:17:26.473 Doorbell Buffer Config: Not Supported
00:17:26.473 Get LBA Status Capability: Not Supported
00:17:26.473 Command & Feature Lockdown Capability: Not Supported
00:17:26.473 Abort Command Limit: 4
00:17:26.473 Async Event Request Limit: 4
00:17:26.473 Number of Firmware Slots: N/A
00:17:26.473 Firmware Slot 1 Read-Only: N/A
00:17:26.473 Firmware Activation Without Reset: N/A
00:17:26.473 Multiple Update Detection Support: N/A
00:17:26.473 Firmware Update Granularity: No Information Provided
00:17:26.473 Per-Namespace SMART Log: No
00:17:26.473 Asymmetric Namespace Access Log Page: Not Supported
00:17:26.473 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:17:26.473 Command Effects Log Page: Supported
00:17:26.473 Get Log Page Extended Data: Supported
00:17:26.473 Telemetry Log Pages: Not Supported
00:17:26.473 Persistent Event Log Pages: Not Supported
00:17:26.473 Supported Log Pages Log Page: May Support
00:17:26.473 Commands Supported & Effects Log Page: Not Supported
00:17:26.473 Feature Identifiers & Effects Log Page: May Support
00:17:26.473 NVMe-MI Commands & Effects Log Page: May Support
00:17:26.473 Data Area 4 for Telemetry Log: Not Supported
00:17:26.473 Error Log Page Entries Supported: 128
00:17:26.473 Keep Alive: Supported
00:17:26.473 Keep Alive Granularity: 10000 ms
00:17:26.473
00:17:26.473 NVM Command Set Attributes
00:17:26.473 ==========================
00:17:26.473 Submission Queue Entry Size
00:17:26.473 Max: 64
00:17:26.473 Min: 64
00:17:26.473 Completion Queue Entry Size
00:17:26.473 Max: 16
00:17:26.473 Min: 16
00:17:26.473 Number of Namespaces: 32
00:17:26.473 Compare Command: Supported
00:17:26.473 Write Uncorrectable Command: Not Supported
00:17:26.473 Dataset Management Command: Supported
00:17:26.473 Write Zeroes Command: Supported
00:17:26.473 Set Features Save Field: Not Supported
00:17:26.473 Reservations: Supported
00:17:26.473 Timestamp: Not Supported
00:17:26.473 Copy: Supported
00:17:26.473 Volatile Write Cache: Present
00:17:26.473 Atomic Write Unit (Normal): 1
00:17:26.473 Atomic Write Unit (PFail): 1
00:17:26.473 Atomic Compare & Write Unit: 1
00:17:26.473 Fused Compare & Write: Supported
00:17:26.473 Scatter-Gather List
00:17:26.473 SGL Command Set: Supported
00:17:26.473 SGL Keyed: Supported
00:17:26.473 SGL Bit Bucket Descriptor: Not Supported
00:17:26.473 SGL Metadata Pointer: Not Supported
00:17:26.473 Oversized SGL: Not Supported
00:17:26.473 SGL Metadata Address: Not Supported
00:17:26.473 SGL Offset: Supported
00:17:26.473 Transport SGL Data Block: Not Supported
00:17:26.473 Replay Protected Memory Block: Not Supported
00:17:26.473
00:17:26.473 Firmware Slot Information
00:17:26.473 =========================
00:17:26.473 Active slot: 1
00:17:26.473 Slot 1 Firmware Revision: 25.01
00:17:26.473
00:17:26.473
00:17:26.473 Commands Supported and Effects
00:17:26.473 ==============================
00:17:26.473 Admin Commands
00:17:26.473 --------------
00:17:26.473 Get Log Page (02h): Supported
00:17:26.473 Identify (06h): Supported
00:17:26.473 Abort (08h): Supported
00:17:26.473 Set Features (09h): Supported
00:17:26.473 Get Features (0Ah): Supported
00:17:26.473 Asynchronous Event Request (0Ch): Supported
00:17:26.473 Keep Alive (18h): Supported
00:17:26.473 I/O Commands
00:17:26.473 ------------
00:17:26.473 Flush (00h): Supported LBA-Change
00:17:26.473 Write (01h): Supported LBA-Change
00:17:26.473 Read (02h): Supported
00:17:26.473 Compare (05h): Supported
00:17:26.473 Write Zeroes (08h): Supported LBA-Change
00:17:26.473 Dataset Management (09h): Supported LBA-Change
00:17:26.473 Copy (19h): Supported LBA-Change
00:17:26.473
00:17:26.473 Error Log
00:17:26.473 =========
00:17:26.473
00:17:26.474 Arbitration
00:17:26.474 ===========
00:17:26.474 Arbitration Burst: 1
00:17:26.474
00:17:26.474 Power Management
00:17:26.474 ================
00:17:26.474 Number of Power States: 1
00:17:26.474 Current Power State: Power State #0
00:17:26.474 Power State #0:
00:17:26.474 Max Power: 0.00 W
00:17:26.474 Non-Operational State: Operational
00:17:26.474 Entry Latency: Not Reported
00:17:26.474 Exit Latency: Not Reported
00:17:26.474 Relative Read Throughput: 0
00:17:26.474 Relative Read Latency: 0
00:17:26.474 Relative Write Throughput: 0
00:17:26.474 Relative Write Latency: 0
00:17:26.474 Idle Power: Not Reported
00:17:26.474 Active Power: Not Reported
00:17:26.474 Non-Operational Permissive Mode: Not Supported
00:17:26.474
00:17:26.474 Health Information
00:17:26.474 ==================
00:17:26.474 Critical Warnings:
00:17:26.474 Available Spare Space: OK
00:17:26.474 Temperature: OK
00:17:26.474 Device Reliability: OK
00:17:26.474 Read Only: No
00:17:26.474 Volatile Memory Backup: OK
00:17:26.474 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:26.474 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:26.474 Available Spare: 0%
00:17:26.474 Available Spare Threshold: 0%
00:17:26.474 Life Percentage [2024-12-09 11:56:34.385243] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385277] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385286] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385308] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:26.474 [2024-12-09 11:56:34.385315] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 27961 doesn't match qid 00:17:26.474 [2024-12-09 11:56:34.385327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32704 cdw0:91299d0 sqhd:c880 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385332] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 27961 doesn't match qid 00:17:26.474 [2024-12-09 11:56:34.385338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32704 cdw0:91299d0 sqhd:c880 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385342] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 27961 doesn't match qid 00:17:26.474 [2024-12-09 11:56:34.385348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32704 cdw0:91299d0 sqhd:c880 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385352] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 27961 doesn't match qid 00:17:26.474 [2024-12-09 11:56:34.385358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32704 cdw0:91299d0 sqhd:c880 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385365] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385390] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385400] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385411] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385433] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:17:26.474 [2024-12-09 11:56:34.385442] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:26.474 [2024-12-09 11:56:34.385448] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:26.474 [2024-12-09 11:56:34.385452] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385459] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385483] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385492] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385499] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385526] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385536] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385544] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385573] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385582] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385589] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385613] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385623] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385630] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385654] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385663] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385671] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385694] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385704] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385711] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385736] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385744] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385751] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385778] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385787] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385795] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385822] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385831] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385838] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.474 [2024-12-09 11:56:34.385844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.474 [2024-12-09 11:56:34.385861] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.474 [2024-12-09 11:56:34.385866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:26.474 [2024-12-09 11:56:34.385870] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385877] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.385901] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.385905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.385909] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385916] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.385943] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.385947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.385952] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385958] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.385964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.385988] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.385993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.385997] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386004] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386031] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386040] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386046] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386073] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386082] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386089] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386115] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386124] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386131] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386161] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386170] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386177] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386203] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386212] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386219] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386242] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386250] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386257] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386285] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386294] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386301] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386328] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386336] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386343] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386370] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386378] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386385] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386409] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386418] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386424] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386448] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386456] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386463] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.475 [2024-12-09 11:56:34.386491] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.475 [2024-12-09 11:56:34.386496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:26.475 [2024-12-09 11:56:34.386500] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182c00 00:17:26.475 [2024-12-09 11:56:34.386507] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386529] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386537] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386544] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386569] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386578] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386585] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386613] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386622] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386629] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386652] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386661] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386669] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386695] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386704] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386711] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386738] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386746] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386753] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386783] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386792] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386798] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386824] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386833] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386840] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386863] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386872] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386879] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386905] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386914] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386922] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386943] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386952] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386959] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.386982] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.386986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.386991] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.386998] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387022] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387031] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387038] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387063] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387072] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387079] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387106] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387115] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387122] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387145] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387156] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387163] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387184] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387193] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387200] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387221] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.476 [2024-12-09 11:56:34.387226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:26.476 [2024-12-09 11:56:34.387230] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387237] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.476 [2024-12-09 11:56:34.387243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.476 [2024-12-09 11:56:34.387260] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387269] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387276] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387302] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387311] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387318] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387339] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387348] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387355] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387386] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387396] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387403] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387426] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387435] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387442] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387468] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387477] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387484] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387508] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387517] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387524] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387552] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387561] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387568] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387594] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387603] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387610] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387636] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387648] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387655] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387678] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387687] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387693] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:26.477 [2024-12-09 11:56:34.387720] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:26.477 [2024-12-09 11:56:34.387724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:26.477 [2024-12-09 11:56:34.387729] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387735] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00 00:17:26.477 [2024-12-09 11:56:34.387741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0
00:17:26.477 [2024-12-09 11:56:34.387757] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:26.477 [2024-12-09 11:56:34.387761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:17:26.477 [2024-12-09 11:56:34.387766] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182c00
00:17:26.477 [2024-12-09 11:56:34.387772] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00
00:17:26.477 [2024-12-09 11:56:34.387778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:26.477 [2024-12-09 11:56:34.387802] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:26.477 [2024-12-09 11:56:34.387806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:17:26.477 [2024-12-09 11:56:34.391817] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182c00
00:17:26.477 [2024-12-09 11:56:34.391825] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182c00
00:17:26.477 [2024-12-09 11:56:34.391831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:26.477 [2024-12-09 11:56:34.391854] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:26.477 [2024-12-09 11:56:34.391858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0014 p:0 m:0 dnr:0
00:17:26.477 [2024-12-09 11:56:34.391863] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182c00
00:17:26.477 [2024-12-09 11:56:34.391868] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:17:26.477 Used: 0%
00:17:26.477 Data Units Read: 0
00:17:26.477 Data Units Written: 0
00:17:26.477 Host Read Commands: 0
00:17:26.477 Host Write Commands: 0
00:17:26.477 Controller Busy Time: 0 minutes
00:17:26.477 Power Cycles: 0
00:17:26.477 Power On Hours: 0 hours
00:17:26.477 Unsafe Shutdowns: 0
00:17:26.477 Unrecoverable Media Errors: 0
00:17:26.477 Lifetime Error Log Entries: 0
00:17:26.477 Warning Temperature Time: 0 minutes
00:17:26.477 Critical Temperature Time: 0 minutes
00:17:26.477
00:17:26.477 Number of Queues
00:17:26.477 ================
00:17:26.477 Number of I/O Submission Queues: 127
00:17:26.477 Number of I/O Completion Queues: 127
00:17:26.477
00:17:26.477 Active Namespaces
00:17:26.477 =================
00:17:26.477 Namespace ID:1
00:17:26.477 Error Recovery Timeout: Unlimited
00:17:26.477 Command Set Identifier: NVM (00h)
00:17:26.477 Deallocate: Supported
00:17:26.477 Deallocated/Unwritten Error: Not Supported
00:17:26.477 Deallocated Read Value: Unknown
00:17:26.477 Deallocate in Write Zeroes: Not Supported
00:17:26.477 Deallocated Guard Field: 0xFFFF
00:17:26.477 Flush: Supported
00:17:26.477 Reservation: Supported
00:17:26.477 Namespace Sharing Capabilities: Multiple Controllers
00:17:26.477 Size (in LBAs): 131072 (0GiB)
00:17:26.477 Capacity (in LBAs): 131072 (0GiB)
00:17:26.477 Utilization (in LBAs): 131072 (0GiB)
00:17:26.477 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:26.477 EUI64: ABCDEF0123456789
00:17:26.477 UUID: b7212b5f-a0dd-4cbe-8822-8b1b7c586f88
00:17:26.477 Thin Provisioning: Not Supported
00:17:26.478 Per-NS Atomic Units: Yes
00:17:26.478 Atomic Boundary Size (Normal): 0
00:17:26.478 Atomic Boundary Size (PFail): 0
00:17:26.478 Atomic Boundary Offset: 0
00:17:26.478 Maximum Single Source Range Length: 65535
00:17:26.478 Maximum Copy Length: 65535
00:17:26.478 Maximum Source Range Count: 1
00:17:26.478 NGUID/EUI64 Never Reused: No
00:17:26.478 Namespace Write Protected: No
00:17:26.478 Number of LBA Formats: 1
00:17:26.478 Current LBA Format: LBA Format #00
00:17:26.478 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:26.478
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:26.478 rmmod nvme_rdma
00:17:26.478 rmmod nvme_fabrics
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3261152 ']'
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3261152
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3261152 ']'
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3261152
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:26.478 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3261152
00:17:26.736 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:26.736 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:26.736 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3261152'
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3261152' 00:17:26.736 killing process with pid 3261152 00:17:26.736 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3261152 00:17:26.736 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3261152 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:26.995 00:17:26.995 real 0m7.380s 00:17:26.995 user 0m6.073s 00:17:26.995 sys 0m4.871s 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.995 ************************************ 00:17:26.995 END TEST nvmf_identify 00:17:26.995 ************************************ 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.995 ************************************ 00:17:26.995 START TEST nvmf_perf 00:17:26.995 ************************************ 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:26.995 * Looking for test storage... 
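The nvmf_identify teardown traced above reduces to a short command sequence. A minimal sketch of the same cleanup, assuming the target PID is held in a shell variable named nvmfpid (the variable name is illustrative; the commands themselves all appear in the trace):

    # Drop the test subsystem, unload the initiator modules, stop the target app.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma             # also pulls out nvme_fabrics as a dependency, hence both rmmod lines above
    modprobe -v -r nvme-fabrics          # no-op safeguard if the dependency removal already got it
    kill "$nvmfpid" && wait "$nvmfpid"   # pid 3261152 in this run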
00:17:26.995 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:26.995 11:56:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.254 --rc genhtml_branch_coverage=1 00:17:27.254 --rc genhtml_function_coverage=1 00:17:27.254 --rc genhtml_legend=1 00:17:27.254 --rc geninfo_all_blocks=1 00:17:27.254 --rc geninfo_unexecuted_blocks=1 00:17:27.254 00:17:27.254 ' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.254 --rc genhtml_branch_coverage=1 00:17:27.254 --rc genhtml_function_coverage=1 00:17:27.254 --rc genhtml_legend=1 00:17:27.254 --rc geninfo_all_blocks=1 00:17:27.254 --rc geninfo_unexecuted_blocks=1 00:17:27.254 00:17:27.254 ' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.254 --rc genhtml_branch_coverage=1 00:17:27.254 --rc genhtml_function_coverage=1 00:17:27.254 --rc genhtml_legend=1 00:17:27.254 --rc geninfo_all_blocks=1 00:17:27.254 --rc geninfo_unexecuted_blocks=1 00:17:27.254 00:17:27.254 ' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.254 --rc genhtml_branch_coverage=1 00:17:27.254 --rc genhtml_function_coverage=1 00:17:27.254 --rc genhtml_legend=1 00:17:27.254 --rc geninfo_all_blocks=1 00:17:27.254 --rc geninfo_unexecuted_blocks=1 00:17:27.254 00:17:27.254 ' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.254 11:56:35 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.254 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.255 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.255 11:56:35 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.255 11:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.821 11:56:40 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.821 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:33.822 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:33.822 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:33.822 Found net devices under 0000:da:00.0: mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
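The scan above walks each Mellanox PCI function and resolves its kernel netdev through sysfs. The same lookup can be done by hand; the addresses below are taken from the "Found 0000:da:00.x" lines, and lspci is only a suggested way to enumerate them, not part of the test script:

    lspci -d 15b3:                              # list Mellanox functions; here 0000:da:00.0 and 0000:da:00.1 (device 0x1015)
    ls /sys/bus/pci/devices/0000:da:00.0/net    # -> mlx_0_0
    ls /sys/bus/pci/devices/0000:da:00.1/net    # -> mlx_0_1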
00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:33.822 Found net devices under 0000:da:00.1: mlx_0_1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:33.822 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.822 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:17:33.822 altname enp218s0f0np0 00:17:33.822 altname ens818f0np0 00:17:33.822 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.822 valid_lft forever preferred_lft forever 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:33.822 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.822 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:17:33.822 altname enp218s0f1np1 00:17:33.822 altname ens818f1np1 00:17:33.822 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.822 valid_lft forever preferred_lft forever 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.822 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:17:33.823 192.168.100.9' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:33.823 192.168.100.9' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:33.823 192.168.100.9' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3264596 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3264596 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3264596 ']' 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.823 11:56:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 [2024-12-09 11:56:41.013087] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
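nvmfappstart, traced above, amounts to launching nvmf_tgt in the background, recording its PID, and polling the RPC socket until the target answers. A minimal sketch; the retry loop below is an assumption standing in for autotest_common.sh's waitforlisten, not its actual code:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                            # 3264596 in this run
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                         # target not listening on /var/tmp/spdk.sock yet
    done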
00:17:33.823 [2024-12-09 11:56:41.013129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.823 [2024-12-09 11:56:41.089263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.823 [2024-12-09 11:56:41.131213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.823 [2024-12-09 11:56:41.131250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.823 [2024-12-09 11:56:41.131257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.823 [2024-12-09 11:56:41.131262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.823 [2024-12-09 11:56:41.131268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.823 [2024-12-09 11:56:41.132680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.823 [2024-12-09 11:56:41.132790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.823 [2024-12-09 11:56:41.132919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.823 [2024-12-09 11:56:41.132920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:33.823 11:56:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:36.355 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:36.355 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:36.614 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:17:36.614 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.872 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:36.872 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:17:36.872 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:36.873 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:17:36.873 11:56:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:17:36.873 [2024-12-09 11:56:44.893176] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:17:36.873 [2024-12-09 11:56:44.914993] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7954a0/0x66add0) succeed. 00:17:37.132 [2024-12-09 11:56:44.926303] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x654e80/0x6eaa80) succeed. 00:17:37.132 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.390 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:37.390 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.390 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:37.390 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:37.648 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:37.906 [2024-12-09 11:56:45.797905] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:37.906 11:56:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:38.164 11:56:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:17:38.164 11:56:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:17:38.164 11:56:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:38.164 11:56:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:17:39.544 Initializing NVMe Controllers 00:17:39.544 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:17:39.544 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:17:39.544 Initialization complete. Launching workers. 
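At this point the rpc.py calls traced above have fully configured the target for the perf runs that follow. Condensed into one sequence for reference ($rpc abbreviates the full scripts/rpc.py path; the ordering is as traced):

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0   # -c 0 requested; raised to 256 per the WARNING above
    $rpc bdev_malloc_create 64 512                                      # -> Malloc0 (64 MiB bdev, 512 B blocks)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420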
00:17:39.544 ======================================================== 00:17:39.544 Latency(us) 00:17:39.544 Device Information : IOPS MiB/s Average min max 00:17:39.544 PCIE (0000:5e:00.0) NSID 1 from core 0: 98444.13 384.55 325.86 24.38 4674.83 00:17:39.544 ======================================================== 00:17:39.544 Total : 98444.13 384.55 325.86 24.38 4674.83 00:17:39.544 00:17:39.544 11:56:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:42.850 Initializing NVMe Controllers 00:17:42.850 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:42.850 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:42.850 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:42.850 Initialization complete. Launching workers. 00:17:42.850 ======================================================== 00:17:42.850 Latency(us) 00:17:42.850 Device Information : IOPS MiB/s Average min max 00:17:42.850 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6428.99 25.11 154.70 49.71 6057.88 00:17:42.850 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5031.99 19.66 197.72 71.08 6083.96 00:17:42.850 ======================================================== 00:17:42.850 Total : 11460.99 44.77 173.59 49.71 6083.96 00:17:42.850 00:17:42.850 11:56:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:46.134 Initializing NVMe Controllers 00:17:46.134 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.134 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.134 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:46.134 Initialization complete. Launching workers. 00:17:46.134 ======================================================== 00:17:46.134 Latency(us) 00:17:46.134 Device Information : IOPS MiB/s Average min max 00:17:46.134 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17543.91 68.53 1821.45 536.66 6249.33 00:17:46.134 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.78 15.69 7956.92 5891.32 9121.89 00:17:46.134 ======================================================== 00:17:46.134 Total : 21561.69 84.23 2964.73 536.66 9121.89 00:17:46.134 00:17:46.134 11:56:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:17:46.134 11:56:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:51.407 Initializing NVMe Controllers 00:17:51.407 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.407 Controller IO queue size 128, less than required. 00:17:51.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:17:51.407 Controller IO queue size 128, less than required. 00:17:51.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:51.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:51.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:51.407 Initialization complete. Launching workers. 00:17:51.407 ======================================================== 00:17:51.407 Latency(us) 00:17:51.407 Device Information : IOPS MiB/s Average min max 00:17:51.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3766.35 941.59 34234.04 15507.18 85127.51 00:17:51.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3899.31 974.83 32381.03 15462.60 57394.18 00:17:51.407 ======================================================== 00:17:51.407 Total : 7665.66 1916.41 33291.46 15462.60 85127.51 00:17:51.407 00:17:51.407 11:56:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:17:51.407 No valid NVMe controllers or AIO or URING devices found 00:17:51.407 Initializing NVMe Controllers 00:17:51.407 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.407 Controller IO queue size 128, less than required. 00:17:51.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:51.407 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:51.407 Controller IO queue size 128, less than required. 00:17:51.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:51.407 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:17:51.407 WARNING: Some requested NVMe devices were skipped 00:17:51.407 11:56:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:17:55.598 Initializing NVMe Controllers 00:17:55.598 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.598 Controller IO queue size 128, less than required. 00:17:55.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:55.598 Controller IO queue size 128, less than required. 00:17:55.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:55.598 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.598 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:55.598 Initialization complete. Launching workers. 
00:17:55.598 00:17:55.598 ==================== 00:17:55.598 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:55.598 RDMA transport: 00:17:55.598 dev name: mlx5_0 00:17:55.598 polls: 391226 00:17:55.598 idle_polls: 387931 00:17:55.598 completions: 42410 00:17:55.598 queued_requests: 1 00:17:55.598 total_send_wrs: 21205 00:17:55.598 send_doorbell_updates: 3058 00:17:55.598 total_recv_wrs: 21332 00:17:55.598 recv_doorbell_updates: 3059 00:17:55.598 --------------------------------- 00:17:55.598 00:17:55.598 ==================== 00:17:55.599 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:55.599 RDMA transport: 00:17:55.599 dev name: mlx5_0 00:17:55.599 polls: 394895 00:17:55.599 idle_polls: 394634 00:17:55.599 completions: 19362 00:17:55.599 queued_requests: 1 00:17:55.599 total_send_wrs: 9681 00:17:55.599 send_doorbell_updates: 251 00:17:55.599 total_recv_wrs: 9808 00:17:55.599 recv_doorbell_updates: 252 00:17:55.599 --------------------------------- 00:17:55.599 ======================================================== 00:17:55.599 Latency(us) 00:17:55.599 Device Information : IOPS MiB/s Average min max 00:17:55.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5292.80 1323.20 24186.31 11061.81 72180.63 00:17:55.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2416.26 604.06 52586.44 31529.42 81403.68 00:17:55.599 ======================================================== 00:17:55.599 Total : 7709.06 1927.27 33087.79 11061.81 81403.68 00:17:55.599 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:55.599 rmmod nvme_rdma 00:17:55.599 rmmod nvme_fabrics 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3264596 ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3264596 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3264596 ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 3264596 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3264596 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3264596' 00:17:55.599 killing process with pid 3264596 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3264596 00:17:55.599 11:57:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3264596 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:58.132 00:17:58.132 real 0m30.742s 00:17:58.132 user 1m39.915s 00:17:58.132 sys 0m5.752s 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.132 ************************************ 00:17:58.132 END TEST nvmf_perf 00:17:58.132 ************************************ 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.132 ************************************ 00:17:58.132 START TEST nvmf_fio_host 00:17:58.132 ************************************ 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:58.132 * Looking for test storage... 
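One number worth pulling out of the --transport-stat dump a few lines up: for NSID 1, 42410 completions arrived across 391226 - 387931 = 3295 non-idle polls, i.e. roughly 12.9 completions per productive poll, with about 99.2% of all polls idle. A hedged way to compute the same ratios from a captured dump (the counter names match the log; this assumes the timestamp prefixes have been stripped so each counter sits on its own line):

    awk '$1 == "polls:"       { p = $2 }
         $1 == "idle_polls:"  { i = $2 }
         $1 == "completions:" {
             # one line per NSID block: busy polls, completions per busy poll, idle fraction
             printf "busy_polls=%d cmpl/busy_poll=%.1f idle=%.2f%%\n", p - i, $2 / (p - i), 100 * i / p
         }' transport_stats.txt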
00:17:58.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.132 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.133 --rc genhtml_branch_coverage=1 00:17:58.133 --rc genhtml_function_coverage=1 00:17:58.133 --rc genhtml_legend=1 00:17:58.133 --rc geninfo_all_blocks=1 00:17:58.133 --rc geninfo_unexecuted_blocks=1 00:17:58.133 00:17:58.133 ' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.133 --rc genhtml_branch_coverage=1 00:17:58.133 --rc genhtml_function_coverage=1 00:17:58.133 --rc genhtml_legend=1 00:17:58.133 --rc geninfo_all_blocks=1 00:17:58.133 --rc geninfo_unexecuted_blocks=1 00:17:58.133 00:17:58.133 ' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.133 --rc genhtml_branch_coverage=1 00:17:58.133 --rc genhtml_function_coverage=1 00:17:58.133 --rc genhtml_legend=1 00:17:58.133 --rc geninfo_all_blocks=1 00:17:58.133 --rc geninfo_unexecuted_blocks=1 00:17:58.133 00:17:58.133 ' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.133 --rc genhtml_branch_coverage=1 00:17:58.133 --rc genhtml_function_coverage=1 00:17:58.133 --rc genhtml_legend=1 00:17:58.133 --rc geninfo_all_blocks=1 00:17:58.133 --rc geninfo_unexecuted_blocks=1 00:17:58.133 00:17:58.133 ' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.133 11:57:05 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.133 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.133 
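The "[: : integer expression expected" line captured above is the standard bash failure mode of handing an empty string to a numeric test operator: '-eq' requires integers on both sides, so '[' '' -eq 1 ']' exits with status 2, and the script simply skips that branch. An illustrative sketch of the pitfall and a guard; the variable name is hypothetical, not the one used in nvmf/common.sh:

    flag=""                                  # empty, as in the trace

    # Reproduces the logged error: '-eq' needs an integer on both sides.
    [ "$flag" -eq 1 ] 2>/dev/null || echo "test fails, branch is skipped"

    # Guarded form: default the value before comparing numerically.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag set"
    fi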
11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.133 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:17:58.134 11:57:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:04.700 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:04.700 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
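After matching each ConnectX PCI function against the vendor:device table above, the script resolves which kernel net interfaces sit on top of a PCI address through plain sysfs: every netdev bound to a function appears under /sys/bus/pci/devices/<addr>/net/. The same lookup in isolation, using the 0000:da:00.0 address from this run:

    pci=0000:da:00.0                                       # first port seen in the log
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )     # glob the sysfs children
    pci_net_devs=( "${pci_net_devs[@]##*/}" )              # strip paths, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"    # -> mlx_0_0 here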
)) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:04.700 Found net devices under 0000:da:00.0: mlx_0_0 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:04.700 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:04.701 Found net devices under 0000:da:00.1: mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:04.701 
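rdma_device_init above has to load the kernel IB/RDMA stack before any RDMA interface can be configured. A standalone sketch, keeping the module order from the trace and adding error handling of my own (root required):

    # Kernel InfiniBand/RDMA modules probed by the trace, in the same order.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done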
11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:04.701 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:04.701 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:18:04.701 altname enp218s0f0np0 00:18:04.701 altname ens818f0np0 00:18:04.701 inet 192.168.100.8/24 scope global mlx_0_0 00:18:04.701 valid_lft forever preferred_lft forever 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
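The address harvesting above is one pipeline per interface: 'ip -o' prints a single record per address, so field 4 is always "addr/prefix" and cut drops the prefix length. The helper as a self-contained function:

    get_ip_address() {
        local interface=$1
        # -o gives one-line records; $4 is e.g. "192.168.100.8/24".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this testbed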
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:04.701 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:04.701 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:18:04.701 altname enp218s0f1np1 00:18:04.701 altname ens818f1np1 00:18:04.701 inet 192.168.100.9/24 scope global mlx_0_1 00:18:04.701 valid_lft forever preferred_lft forever 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:04.701 11:57:11 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:04.701 192.168.100.9' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:04.701 192.168.100.9' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:04.701 192.168.100.9' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3272247 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
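RDMA_IP_LIST above is newline-separated, so the first and second target IPs fall out of head and tail, exactly as the trace shows:

    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"    # 192.168.100.8 192.168.100.9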
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.701 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3272247 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3272247 ']' 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.702 [2024-12-09 11:57:11.774545] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:18:04.702 [2024-12-09 11:57:11.774595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.702 [2024-12-09 11:57:11.835491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.702 [2024-12-09 11:57:11.878561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.702 [2024-12-09 11:57:11.878595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.702 [2024-12-09 11:57:11.878602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.702 [2024-12-09 11:57:11.878608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.702 [2024-12-09 11:57:11.878614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.702 [2024-12-09 11:57:11.880200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.702 [2024-12-09 11:57:11.880307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.702 [2024-12-09 11:57:11.880414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.702 [2024-12-09 11:57:11.880415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:04.702 11:57:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:04.702 [2024-12-09 11:57:12.173985] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dd2940/0x1dd6e30) succeed. 00:18:04.702 [2024-12-09 11:57:12.185422] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dd3fd0/0x1e184d0) succeed. 
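The bring-up sequence above (launch nvmf_tgt, wait for its RPC socket, create the RDMA transport) condenses to the following sketch. The poll loop is a simplified stand-in for waitforlisten, not SPDK's implementation; the flags are the ones recorded in this run:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &    # cores 0-3, all trace groups
    nvmfpid=$!

    # Poll until the target answers on /var/tmp/spdk.sock (simplified waitforlisten).
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # RDMA transport with 8 KiB I/O unit size and 1024 shared buffers, as traced.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192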
00:18:04.702 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:04.702 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.702 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.702 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:04.702 Malloc1 00:18:04.702 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.961 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.961 11:57:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:05.219 [2024-12-09 11:57:13.167746] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:05.219 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:05.478 11:57:13 
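Target-side plumbing for the fio run is a handful of RPCs, all visible above: a 64 MiB malloc bdev, a subsystem, the namespace attach, and RDMA listeners for the subsystem and discovery. Replayed as a plain script with the arguments from the log:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420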
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:05.478 11:57:13 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:05.736 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:05.736 fio-3.35 00:18:05.736 Starting 1 thread 00:18:08.270 00:18:08.270 test: (groupid=0, jobs=1): err= 0: pid=3272727: Mon Dec 9 11:57:16 2024 00:18:08.270 read: IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)(133MiB/2004msec) 00:18:08.270 slat (nsec): min=1382, max=34177, avg=1561.91, stdev=529.67 00:18:08.270 clat (usec): min=2112, max=6833, avg=3749.51, stdev=96.97 00:18:08.270 lat (usec): min=2134, max=6835, avg=3751.07, stdev=96.90 00:18:08.270 clat percentiles (usec): 00:18:08.270 | 1.00th=[ 3720], 5.00th=[ 3720], 10.00th=[ 3720], 20.00th=[ 3720], 00:18:08.270 | 30.00th=[ 3752], 40.00th=[ 3752], 50.00th=[ 3752], 60.00th=[ 3752], 00:18:08.270 | 70.00th=[ 3752], 80.00th=[ 3752], 90.00th=[ 3752], 95.00th=[ 3785], 00:18:08.270 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 4948], 99.95th=[ 6259], 00:18:08.270 | 99.99th=[ 6783] 00:18:08.270 bw ( KiB/s): min=66432, max=68584, per=100.00%, avg=67788.00, stdev=1006.27, samples=4 00:18:08.270 iops : min=16608, max=17146, avg=16947.00, stdev=251.57, samples=4 00:18:08.270 write: IOPS=17.0k, BW=66.4MiB/s (69.6MB/s)(133MiB/2004msec); 0 zone resets 00:18:08.270 slat (nsec): min=1404, max=20774, avg=1635.88, stdev=528.00 00:18:08.270 clat (usec): min=2145, max=6819, avg=3747.39, stdev=86.78 00:18:08.270 lat (usec): min=2157, max=6821, avg=3749.02, stdev=86.72 00:18:08.270 clat percentiles (usec): 00:18:08.270 | 1.00th=[ 3720], 5.00th=[ 3720], 10.00th=[ 3720], 20.00th=[ 3720], 00:18:08.270 | 30.00th=[ 3752], 40.00th=[ 3752], 50.00th=[ 3752], 60.00th=[ 3752], 00:18:08.270 | 70.00th=[ 3752], 80.00th=[ 3752], 90.00th=[ 3752], 95.00th=[ 3785], 00:18:08.270 | 99.00th=[ 3916], 99.50th=[ 3982], 99.90th=[ 4490], 99.95th=[ 5866], 00:18:08.270 | 99.99th=[ 6783] 00:18:08.270 bw ( KiB/s): min=66656, max=68840, per=100.00%, avg=67980.00, stdev=931.27, samples=4 00:18:08.270 iops : min=16664, max=17210, avg=16995.00, stdev=232.82, samples=4 00:18:08.270 lat (msec) : 4=99.50%, 10=0.50% 00:18:08.270 cpu : usr=99.55%, sys=0.05%, ctx=14, majf=0, minf=3 00:18:08.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:08.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.270 issued rwts: total=33963,34048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.270 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.270 00:18:08.270 Run status group 0 (all jobs): 00:18:08.270 READ: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=133MiB (139MB), run=2004-2004msec 00:18:08.270 WRITE: bw=66.4MiB/s (69.6MB/s), 66.4MiB/s-66.4MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2004-2004msec 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:08.270 11:57:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:08.529 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:08.529 fio-3.35 00:18:08.529 Starting 1 thread 00:18:11.060 00:18:11.060 test: (groupid=0, jobs=1): err= 0: pid=3273302: Mon Dec 9 11:57:18 2024 00:18:11.060 read: IOPS=13.7k, BW=214MiB/s (224MB/s)(421MiB/1967msec) 00:18:11.060 slat (usec): min=2, max=110, avg= 2.66, stdev= 1.47 00:18:11.060 clat (usec): min=551, max=9608, avg=1710.96, stdev=1394.84 00:18:11.060 lat (usec): min=553, max=9628, avg=1713.62, stdev=1395.37 00:18:11.060 clat percentiles (usec): 00:18:11.060 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 955], 00:18:11.060 | 30.00th=[ 1020], 40.00th=[ 1106], 50.00th=[ 1221], 60.00th=[ 1352], 00:18:11.060 | 70.00th=[ 1483], 80.00th=[ 1680], 90.00th=[ 4293], 95.00th=[ 5145], 00:18:11.060 | 99.00th=[ 6783], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 8979], 00:18:11.060 | 99.99th=[ 9503] 00:18:11.060 bw ( KiB/s): min=104576, max=111360, per=49.50%, avg=108384.00, stdev=2810.90, samples=4 00:18:11.060 iops : min= 6536, max= 6960, avg=6774.00, stdev=175.68, samples=4 00:18:11.060 write: IOPS=7901, BW=123MiB/s (129MB/s)(221MiB/1788msec); 0 zone resets 00:18:11.060 slat (usec): min=26, max=121, avg=29.68, stdev= 7.90 00:18:11.060 clat (usec): min=4496, max=21370, avg=13159.64, stdev=2032.01 00:18:11.060 lat (usec): min=4523, max=21397, avg=13189.33, stdev=2031.54 00:18:11.060 clat percentiles (usec): 00:18:11.060 | 1.00th=[ 7373], 5.00th=[10290], 10.00th=[10945], 20.00th=[11600], 00:18:11.060 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13042], 60.00th=[13566], 00:18:11.060 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15664], 95.00th=[16712], 00:18:11.060 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20317], 99.95th=[20841], 00:18:11.060 | 99.99th=[21103] 00:18:11.060 bw ( KiB/s): min=106080, max=116224, per=88.67%, avg=112088.00, stdev=4311.76, samples=4 00:18:11.060 iops : min= 6630, max= 7264, avg=7005.50, stdev=269.49, samples=4 00:18:11.060 lat (usec) : 750=1.35%, 1000=16.30% 00:18:11.060 lat (msec) : 2=38.48%, 4=2.20%, 10=8.39%, 20=33.22%, 50=0.05% 00:18:11.060 cpu : usr=97.11%, sys=1.30%, ctx=184, majf=0, minf=3 00:18:11.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:11.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:11.060 issued rwts: total=26918,14127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:11.060 00:18:11.060 Run status group 0 (all jobs): 00:18:11.060 READ: bw=214MiB/s (224MB/s), 214MiB/s-214MiB/s (224MB/s-224MB/s), io=421MiB (441MB), run=1967-1967msec 00:18:11.060 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=221MiB (231MB), run=1788-1788msec 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:11.060 
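Both fio jobs above reach the target entirely from userspace: fio_plugin LD_PRELOADs SPDK's external ioengine and packs the whole NVMe-oF connection into --filename, so no kernel nvme device is ever created. The ldd/grep step only prepends a sanitizer runtime to LD_PRELOAD when the plugin was built with ASan (none was linked in this run). A sketch of the first invocation:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    plugin="$SPDK/build/fio/spdk_nvme"                         # external fio ioengine

    # Preload the ASan runtime first if the plugin links it (empty here).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096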
11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.060 11:57:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:11.060 rmmod nvme_rdma 00:18:11.060 rmmod nvme_fabrics 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3272247 ']' 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3272247 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3272247 ']' 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3272247 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272247 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272247' 00:18:11.060 killing process with pid 3272247 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3272247 00:18:11.060 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3272247 00:18:11.319 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.319 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:11.319 00:18:11.319 real 0m13.656s 00:18:11.319 user 0m47.754s 00:18:11.319 sys 0m5.279s 00:18:11.319 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.319 11:57:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.319 ************************************ 00:18:11.319 END TEST nvmf_fio_host 00:18:11.319 ************************************ 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh 
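The teardown above is deliberately defensive: killprocess refuses to signal a pid unless it is still alive and still names the expected process, and module unload is attempted with verbose removal. A condensed sketch (the retry loop and OS check from the trace are omitted):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1          # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")         # "reactor_0" in this log
        [ "$name" = sudo ] && return 1                  # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    killprocess "$nvmfpid"    # pid captured when nvmf_tgt was launched
    sync
    modprobe -v -r nvme-rdma                            # then nvme-fabrics, as traced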
--transport=rdma 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.579 ************************************ 00:18:11.579 START TEST nvmf_failover 00:18:11.579 ************************************ 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:11.579 * Looking for test storage... 00:18:11.579 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.579 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.580 --rc genhtml_branch_coverage=1 00:18:11.580 --rc genhtml_function_coverage=1 00:18:11.580 --rc genhtml_legend=1 00:18:11.580 --rc geninfo_all_blocks=1 00:18:11.580 --rc geninfo_unexecuted_blocks=1 00:18:11.580 00:18:11.580 ' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.580 --rc genhtml_branch_coverage=1 00:18:11.580 --rc genhtml_function_coverage=1 00:18:11.580 --rc genhtml_legend=1 00:18:11.580 --rc geninfo_all_blocks=1 00:18:11.580 --rc geninfo_unexecuted_blocks=1 00:18:11.580 00:18:11.580 ' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.580 --rc genhtml_branch_coverage=1 00:18:11.580 --rc genhtml_function_coverage=1 00:18:11.580 --rc genhtml_legend=1 00:18:11.580 --rc geninfo_all_blocks=1 00:18:11.580 --rc geninfo_unexecuted_blocks=1 00:18:11.580 00:18:11.580 ' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.580 --rc genhtml_branch_coverage=1 00:18:11.580 --rc genhtml_function_coverage=1 00:18:11.580 --rc genhtml_legend=1 00:18:11.580 --rc geninfo_all_blocks=1 00:18:11.580 --rc geninfo_unexecuted_blocks=1 00:18:11.580 00:18:11.580 ' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.580 11:57:19 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.580 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:11.580 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.839 11:57:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:18.406 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:18.406 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.406 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:18.407 Found net devices under 0000:da:00.0: mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:18.407 Found net devices under 0000:da:00.1: mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:18.407 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.407 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:18:18.407 altname enp218s0f0np0 00:18:18.407 altname ens818f0np0 00:18:18.407 inet 192.168.100.8/24 scope global mlx_0_0 00:18:18.407 
valid_lft forever preferred_lft forever 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:18.407 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.407 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:18:18.407 altname enp218s0f1np1 00:18:18.407 altname ens818f1np1 00:18:18.407 inet 192.168.100.9/24 scope global mlx_0_1 00:18:18.407 valid_lft forever preferred_lft forever 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.407 11:57:25 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:18.407 192.168.100.9' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:18.407 192.168.100.9' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:18.407 192.168.100.9' 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:18:18.407 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3276824 00:18:18.408 
11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3276824 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3276824 ']' 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:18.408 [2024-12-09 11:57:25.548469] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:18:18.408 [2024-12-09 11:57:25.548512] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.408 [2024-12-09 11:57:25.627038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.408 [2024-12-09 11:57:25.668006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.408 [2024-12-09 11:57:25.668042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.408 [2024-12-09 11:57:25.668049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.408 [2024-12-09 11:57:25.668055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.408 [2024-12-09 11:57:25.668061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
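The address resolution traced above reduces to a small shell idiom. As a minimal standalone sketch, assuming the two Mellanox netdevs enumerate as mlx_0_0 and mlx_0_1 (as they did on this node):

  # Resolve the first IPv4 address of each RDMA-capable netdev, mirroring
  # the get_ip_address calls in nvmf/common.sh (interface names assumed).
  for interface in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  done

On this host the loop prints 192.168.100.8 and 192.168.100.9, the addresses the harness then records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before launching nvmf_tgt.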
00:18:18.408 [2024-12-09 11:57:25.669420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.408 [2024-12-09 11:57:25.669525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.408 [2024-12-09 11:57:25.669526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.408 11:57:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:18.408 [2024-12-09 11:57:26.000395] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24df080/0x24e3570) succeed. 00:18:18.408 [2024-12-09 11:57:26.011381] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24e0670/0x2524c10) succeed. 00:18:18.408 11:57:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:18.408 Malloc0 00:18:18.408 11:57:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.666 11:57:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.924 11:57:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:18.924 [2024-12-09 11:57:26.908454] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:18.924 11:57:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:19.182 [2024-12-09 11:57:27.096801] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:19.182 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:19.441 [2024-12-09 11:57:27.317618] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3277208 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3277208 /var/tmp/bdevperf.sock 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3277208 ']' 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.441 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:19.700 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.700 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:19.700 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:19.958 NVMe0n1 00:18:19.959 11:57:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:20.217 00:18:20.217 11:57:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3277311 00:18:20.217 11:57:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.217 11:57:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:21.152 11:57:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:21.458 11:57:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:24.744 11:57:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:24.744 00:18:24.744 11:57:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:25.003 11:57:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:28.289 11:57:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.289 [2024-12-09 11:57:36.010191] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.289 11:57:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:29.225 11:57:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:29.225 11:57:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3277311 00:18:35.798 { 00:18:35.798 "results": [ 00:18:35.798 { 00:18:35.798 "job": "NVMe0n1", 00:18:35.798 "core_mask": "0x1", 00:18:35.798 "workload": "verify", 00:18:35.798 "status": "finished", 00:18:35.798 "verify_range": { 00:18:35.798 "start": 0, 00:18:35.798 "length": 16384 00:18:35.798 }, 00:18:35.798 "queue_depth": 128, 00:18:35.798 "io_size": 4096, 00:18:35.798 "runtime": 15.00502, 00:18:35.798 "iops": 13586.78628885533, 00:18:35.798 "mibps": 53.07338394084113, 00:18:35.798 "io_failed": 4277, 00:18:35.798 "io_timeout": 0, 00:18:35.798 "avg_latency_us": 9203.868959606614, 00:18:35.798 "min_latency_us": 337.4323809523809, 00:18:35.798 "max_latency_us": 1046578.7123809524 00:18:35.798 } 00:18:35.798 ], 00:18:35.798 "core_count": 1 00:18:35.798 } 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3277208 ']' 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277208' 00:18:35.798 killing process with pid 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3277208 00:18:35.798 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:35.798 [2024-12-09 11:57:27.391048] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 
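Before the bdevperf trace continues below, the whole run condenses into a short RPC sequence. The following is a hedged recap reconstructed from the xtrace above, not a script shipped with SPDK; the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path is abbreviated to $rpc, and all values are the ones recorded in the trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: RDMA transport, a 64 MiB / 512 B malloc namespace, and
  # three listeners on the same address.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s "$port"
  done

  # Initiator side: bdevperf (-z -r $sock -q 128 -o 4096 -w verify -t 15 -f)
  # attaches the subsystem on two paths with explicit failover policy.
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n "$nqn" -x failover
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n "$nqn" -x failover

  # While verify I/O runs, listeners are pulled and re-added to force the
  # initiator across ports 4420 -> 4421 -> 4422 -> 4420.
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
  sleep 3
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n "$nqn" -x failover
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a 192.168.100.8 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a 192.168.100.8 -s 4422

The JSON block above is bdevperf's verdict on this sequence: roughly 13,587 IOPS sustained over the 15 s run, with io_failed: 4277 reflecting commands aborted (SQ DELETION) while paths were being torn down, as the completions below show.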
00:18:35.798 [2024-12-09 11:57:27.391100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277208 ] 00:18:35.798 [2024-12-09 11:57:27.468126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.798 [2024-12-09 11:57:27.511211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.798 Running I/O for 15 seconds... 00:18:35.798 17127.00 IOPS, 66.90 MiB/s [2024-12-09T10:57:43.851Z] 9284.50 IOPS, 36.27 MiB/s [2024-12-09T10:57:43.851Z] [2024-12-09 11:57:30.330340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.798 [2024-12-09 11:57:30.330488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 
len:0x1000 key:0x180f00 00:18:35.798 [2024-12-09 11:57:30.330495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0
[try.txt repeats the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair for every remaining queued READ: lba 17544 through 17976 in steps of 8, buffers 0x2000043dc000 down through 0x200004370000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0]
11:57:30.331382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.800 [2024-12-09 11:57:30.331391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180f00 00:18:35.800 [2024-12-09 11:57:30.331397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.800 [2024-12-09 11:57:30.331406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x180f00 00:18:35.800 [2024-12-09 11:57:30.331412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.800 [2024-12-09 11:57:30.331421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x180f00 00:18:35.800 [2024-12-09 11:57:30.331427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.800 [2024-12-09 11:57:30.331436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x180f00 00:18:35.800 [2024-12-09 11:57:30.331445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.800 [2024-12-09 11:57:30.331453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 
00:18:35.801 [2024-12-09 11:57:30.331668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x180f00 00:18:35.801 [2024-12-09 11:57:30.331864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.801 [2024-12-09 11:57:30.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.331989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.331997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x180f00 
00:18:35.802 [2024-12-09 11:57:30.332094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.332170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.332178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.340538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.340573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.340589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x180f00 00:18:35.802 [2024-12-09 11:57:30.340603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.802 [2024-12-09 11:57:30.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.802 [2024-12-09 11:57:30.340636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.802 [2024-12-09 11:57:30.340644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.802 [2024-12-09 11:57:30.340651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.340658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.803 [2024-12-09 11:57:30.340666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.340674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.803 [2024-12-09 11:57:30.340681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.340692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.803 [2024-12-09 11:57:30.340698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.340706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.803 [2024-12-09 11:57:30.340714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.340722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.803 [2024-12-09 11:57:30.340728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.342853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.803 [2024-12-09 11:57:30.342870] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.803 [2024-12-09 11:57:30.342879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:8 PRP1 0x0 PRP2 0x0 00:18:35.803 [2024-12-09 11:57:30.342889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.342934] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:35.803 [2024-12-09 11:57:30.342947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:35.803 [2024-12-09 11:57:30.342994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.803 [2024-12-09 11:57:30.343006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:18926c0 sqhd:6b40 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.343017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.803 [2024-12-09 11:57:30.343026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:18926c0 sqhd:6b40 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.343035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.803 [2024-12-09 11:57:30.343045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:18926c0 sqhd:6b40 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.343054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.803 [2024-12-09 11:57:30.343063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:18926c0 sqhd:6b40 p:0 m:0 dnr:0 00:18:35.803 [2024-12-09 11:57:30.361353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:35.803 [2024-12-09 11:57:30.361371] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:18:35.803 [2024-12-09 11:57:30.361381] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:18:35.803 [2024-12-09 11:57:30.364304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:35.803 [2024-12-09 11:57:30.408512] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:18:35.803 10915.00 IOPS, 42.64 MiB/s [2024-12-09T10:57:43.856Z] 12487.00 IOPS, 48.78 MiB/s [2024-12-09T10:57:43.856Z] 11859.00 IOPS, 46.32 MiB/s [2024-12-09T10:57:43.856Z] [2024-12-09 11:57:33.801281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183700 00:18:35.803 [2024-12-09 11:57:33.801309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0
[... the same *NOTICE* pair repeats for the remaining queued commands, interleaved READ lba:94712 through lba:95032 (SGL KEYED DATA BLOCK, key:0x183700) and WRITE lba:95232 through lba:95568 (SGL DATA BLOCK OFFSET 0x0), each completed ABORTED - SQ DELETION (00/08) ...]
00:18:35.806 [2024-12-09 11:57:33.802616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.806 [2024-12-09 11:57:33.802623]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.806 [2024-12-09 11:57:33.802639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.806 [2024-12-09 11:57:33.802653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.806 [2024-12-09 11:57:33.802667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.806 [2024-12-09 11:57:33.802681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183700 00:18:35.806 [2024-12-09 11:57:33.802697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183700 00:18:35.806 [2024-12-09 11:57:33.802711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x183700 00:18:35.806 [2024-12-09 11:57:33.802726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.806 [2024-12-09 11:57:33.802734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95152 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.802924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.802944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.802959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.802974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.802989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.802997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.803004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.803019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.803035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.807 [2024-12-09 11:57:33.803050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x183700 00:18:35.807 [2024-12-09 11:57:33.803161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.807 [2024-12-09 11:57:33.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x183700 00:18:35.808 [2024-12-09 11:57:33.803176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.803184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.808 [2024-12-09 11:57:33.803191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.803200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:35.808 [2024-12-09 11:57:33.803207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.803215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.808 [2024-12-09 11:57:33.803222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.803230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.808 [2024-12-09 11:57:33.803236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.803244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.808 [2024-12-09 11:57:33.803251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.804981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.808 [2024-12-09 11:57:33.804994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.808 [2024-12-09 11:57:33.805001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:18:35.808 [2024-12-09 11:57:33.805008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.808 [2024-12-09 11:57:33.805046] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:18:35.808 [2024-12-09 11:57:33.805056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:35.808 [2024-12-09 11:57:33.807892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:35.808 [2024-12-09 11:57:33.822878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:18:35.808 [2024-12-09 11:57:33.858803] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
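The sequence above is the failover path end to end: queued I/O is completed with SQ DELETION status, the controller is marked failed, bdev_nvme switches the trid from 192.168.100.8:4421 to 192.168.100.8:4422, and the reset completes successfully. As a rough illustration (not the bdev_nvme code itself, which automates this internally), here is a minimal sketch of how a raw spdk/nvme.h consumer could react to the same CQ transport error -6 (ENXIO) that spdk_nvme_qpair_process_completions reports in this log; the function and variable names are hypothetical.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical poller: process completions on one I/O qpair and trigger a
 * controller reset when the transport reports a CQ error, mirroring the
 * "CQ transport error -6" -> "resetting controller" sequence in this log. */
static void
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* A negative return signals a transport-level failure, not a per-I/O error. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* Queued requests were already completed as ABORTED - SQ DELETION;
		 * a reset reconnects the controller and re-enables its qpairs. */
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			fprintf(stderr, "controller reset failed\n");
		}
	}
}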
00:18:35.808 10989.50 IOPS, 42.93 MiB/s [2024-12-09T10:57:43.861Z] 11907.71 IOPS, 46.51 MiB/s [2024-12-09T10:57:43.861Z] 12595.62 IOPS, 49.20 MiB/s [2024-12-09T10:57:43.861Z] 13108.22 IOPS, 51.20 MiB/s [2024-12-09T10:57:43.861Z]
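These throughput samples are self-consistent with the 4 KiB I/O size implied by the command prints (len:8 blocks, assuming the namespace is formatted with 512-byte blocks): 10989.50 IOPS x 4096 B = 45,012,992 B/s = 42.93 MiB/s, and likewise 13108.22 IOPS x 4096 B gives the 51.20 MiB/s shown.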
00:18:35.808 [2024-12-09 11:57:38.253218 - 11:57:38.255023] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for dozens of queued qid:1 commands -- READ lba:50144-50592 len:8 (SGL KEYED DATA BLOCK len:0x1000 key:0x180f00) and WRITE lba:50600-51048 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) -- each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.255203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.812 [2024-12-09 11:57:38.255210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:79f4000 sqhd:7210 p:0 m:0 dnr:0 00:18:35.812 [2024-12-09 11:57:38.256991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.812 [2024-12-09 11:57:38.257003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.813 [2024-12-09 11:57:38.257009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51160 len:8 PRP1 0x0 PRP2 0x0 00:18:35.813 [2024-12-09 11:57:38.257016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.813 [2024-12-09 11:57:38.257055] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:18:35.813 [2024-12-09 11:57:38.257064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:35.813 [2024-12-09 11:57:38.261180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:35.813 11797.40 IOPS, 46.08 MiB/s [2024-12-09T10:57:43.866Z] [2024-12-09 11:57:38.275883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:18:35.813 [2024-12-09 11:57:38.311558] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
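The sequence above (queued I/O aborted by SQ deletion on 192.168.100.8:4422, failover to 4420, then a successful controller reset) is bdev_nvme multipath doing its job: the test registers the same controller name against several listener ports with -x failover, so when the active path is torn down the driver retries on the next registered trid and resets. A minimal sketch of that wiring, using the rpc.py invocations that appear verbatim later in this log (socket path, ports and NQN are the test's own; illustrative, not the exact test script):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    # Register three paths to one subsystem under a single controller name;
    # -x failover makes the additional trids standby paths.
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
            -a 192.168.100.8 -s $port -f ipv4 -n $nqn -x failover
    done
    # Detaching the active path then triggers the "Start failover ...
    # Resetting controller successful" notices logged above.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn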
00:18:35.813 12215.09 IOPS, 47.72 MiB/s [2024-12-09T10:57:43.866Z] 12640.67 IOPS, 49.38 MiB/s [2024-12-09T10:57:43.866Z] 13004.23 IOPS, 50.80 MiB/s [2024-12-09T10:57:43.866Z] 13318.14 IOPS, 52.02 MiB/s 00:18:35.813 Latency(us) 00:18:35.813 [2024-12-09T10:57:43.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.813 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:35.813 Verification LBA range: start 0x0 length 0x4000 00:18:35.813 NVMe0n1 : 15.01 13586.79 53.07 285.04 0.00 9203.87 337.43 1046578.71 00:18:35.813 [2024-12-09T10:57:43.866Z] =================================================================================================================== 00:18:35.813 [2024-12-09T10:57:43.866Z] Total : 13586.79 53.07 285.04 0.00 9203.87 337.43 1046578.71 00:18:35.813 Received shutdown signal, test time was about 15.000000 seconds 00:18:35.813 00:18:35.813 Latency(us) 00:18:35.813 [2024-12-09T10:57:43.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.813 [2024-12-09T10:57:43.866Z] =================================================================================================================== 00:18:35.813 [2024-12-09T10:57:43.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3279838 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3279838 /var/tmp/bdevperf.sock 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3279838 ']' 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
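The first bdevperf pass above is judged purely on how many controller resets completed: the script greps its own output for the reset notice and demands exactly one per forced failover. A condensed sketch of that check, reconstructed from the @65/@67 trace lines above (the grep's input file is not visible in the trace; try.txt, which the test cats later, is the likely target):

    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    (( count != 3 )) && exit 1   # three forced failovers must yield three successful resets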
00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:35.813 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:36.072 [2024-12-09 11:57:43.961315] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:36.072 11:57:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:36.330 [2024-12-09 11:57:44.174111] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:36.330 11:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:36.593 NVMe0n1 00:18:36.593 11:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:36.852 00:18:36.852 11:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:37.110 00:18:37.110 11:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.110 11:57:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:37.374 11:57:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.374 11:57:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:40.661 11:57:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:40.661 11:57:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:40.661 11:57:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3280755 00:18:40.661 11:57:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.661 11:57:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3280755 00:18:42.038 { 00:18:42.038 "results": [ 00:18:42.038 { 00:18:42.038 "job": "NVMe0n1", 
00:18:42.038 "core_mask": "0x1", 00:18:42.038 "workload": "verify", 00:18:42.038 "status": "finished", 00:18:42.038 "verify_range": { 00:18:42.038 "start": 0, 00:18:42.038 "length": 16384 00:18:42.038 }, 00:18:42.038 "queue_depth": 128, 00:18:42.038 "io_size": 4096, 00:18:42.038 "runtime": 1.00721, 00:18:42.038 "iops": 17156.30305497364, 00:18:42.038 "mibps": 67.01680880849078, 00:18:42.038 "io_failed": 0, 00:18:42.038 "io_timeout": 0, 00:18:42.038 "avg_latency_us": 7422.265340388008, 00:18:42.038 "min_latency_us": 2777.478095238095, 00:18:42.038 "max_latency_us": 13544.106666666667 00:18:42.038 } 00:18:42.038 ], 00:18:42.038 "core_count": 1 00:18:42.038 } 00:18:42.038 11:57:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:42.038 [2024-12-09 11:57:43.574097] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:18:42.038 [2024-12-09 11:57:43.574152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279838 ] 00:18:42.038 [2024-12-09 11:57:43.653522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.038 [2024-12-09 11:57:43.691032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.038 [2024-12-09 11:57:45.355130] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:42.038 [2024-12-09 11:57:45.355660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:42.038 [2024-12-09 11:57:45.355691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:42.038 [2024-12-09 11:57:45.374899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:18:42.038 [2024-12-09 11:57:45.391312] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:42.038 Running I/O for 1 seconds... 
00:18:42.038 17152.00 IOPS, 67.00 MiB/s 00:18:42.038 Latency(us) 00:18:42.038 [2024-12-09T10:57:50.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.038 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:42.038 Verification LBA range: start 0x0 length 0x4000 00:18:42.038 NVMe0n1 : 1.01 17156.30 67.02 0.00 0.00 7422.27 2777.48 13544.11 00:18:42.038 [2024-12-09T10:57:50.091Z] =================================================================================================================== 00:18:42.038 [2024-12-09T10:57:50.091Z] Total : 17156.30 67.02 0.00 0.00 7422.27 2777.48 13544.11 00:18:42.038 11:57:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.038 11:57:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:42.038 11:57:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.297 11:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.297 11:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:42.297 11:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.556 11:57:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3279838 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3279838 ']' 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3279838 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279838 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279838' 00:18:45.841 killing process with pid 3279838 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3279838 00:18:45.841 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3279838 00:18:46.100 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:18:46.100 11:57:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:46.359 rmmod nvme_rdma 00:18:46.359 rmmod nvme_fabrics 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3276824 ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3276824 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3276824 ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3276824 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276824 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276824' 00:18:46.359 killing process with pid 3276824 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3276824 00:18:46.359 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3276824 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:46.618 00:18:46.618 real 0m35.099s 00:18:46.618 user 1m59.083s 00:18:46.618 sys 0m6.278s 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
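The two killprocess invocations traced above (pids 3279838 and 3276824) follow the same guard sequence before signalling. A hedged reconstruction of that helper from the xtrace lines alone (the real function in common/autotest_common.sh may differ in detail, e.g. in how it treats a sudo wrapper):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # a pid argument is required
        kill -0 "$pid" || return 0             # bail out if it is not running
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then   # the sudo branch is not exercised in this trace
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                        # reap it so the exit status propagates
        fi
    }

In the trace the guard resolves to reactor_0 and reactor_1 respectively, so both processes are signalled and reaped directly.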
00:18:46.618 ************************************ 00:18:46.618 END TEST nvmf_failover 00:18:46.618 ************************************ 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 ************************************ 00:18:46.618 START TEST nvmf_host_discovery 00:18:46.618 ************************************ 00:18:46.618 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:18:46.878 * Looking for test storage... 00:18:46.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.878 --rc genhtml_branch_coverage=1 00:18:46.878 --rc genhtml_function_coverage=1 00:18:46.878 --rc genhtml_legend=1 00:18:46.878 --rc geninfo_all_blocks=1 00:18:46.878 --rc geninfo_unexecuted_blocks=1 00:18:46.878 00:18:46.878 ' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.878 --rc genhtml_branch_coverage=1 00:18:46.878 --rc genhtml_function_coverage=1 00:18:46.878 --rc genhtml_legend=1 00:18:46.878 --rc geninfo_all_blocks=1 00:18:46.878 --rc geninfo_unexecuted_blocks=1 00:18:46.878 00:18:46.878 ' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.878 --rc genhtml_branch_coverage=1 00:18:46.878 --rc genhtml_function_coverage=1 00:18:46.878 --rc genhtml_legend=1 00:18:46.878 --rc geninfo_all_blocks=1 00:18:46.878 --rc geninfo_unexecuted_blocks=1 00:18:46.878 00:18:46.878 ' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.878 --rc genhtml_branch_coverage=1 00:18:46.878 --rc genhtml_function_coverage=1 00:18:46.878 --rc genhtml_legend=1 00:18:46.878 --rc geninfo_all_blocks=1 00:18:46.878 --rc geninfo_unexecuted_blocks=1 00:18:46.878 00:18:46.878 ' 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.878 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:46.878 11:57:54 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:46.878 [... nvmf/common.sh xtrace elided: sets NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562, NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn; then sources scripts/common.sh and paths/export.sh, whose @2-@6 lines re-export the go/golangci/protoc toolchain PATH prefixes ...]
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:46.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the
same IP for host and target.' 00:18:46.879 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:18:46.879 00:18:46.879 real 0m0.196s 00:18:46.879 user 0m0.123s 00:18:46.879 sys 0m0.087s 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.879 ************************************ 00:18:46.879 END TEST nvmf_host_discovery 00:18:46.879 ************************************ 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.879 ************************************ 00:18:46.879 START TEST nvmf_host_multipath_status 00:18:46.879 ************************************ 00:18:46.879 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:18:47.138 * Looking for test storage... 00:18:47.138 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:47.138 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.138 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.138 11:57:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:47.138 11:57:55 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.138 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:47.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.139 --rc genhtml_branch_coverage=1 00:18:47.139 --rc genhtml_function_coverage=1 00:18:47.139 --rc genhtml_legend=1 00:18:47.139 --rc geninfo_all_blocks=1 00:18:47.139 --rc geninfo_unexecuted_blocks=1 00:18:47.139 00:18:47.139 ' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:47.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.139 --rc genhtml_branch_coverage=1 00:18:47.139 --rc genhtml_function_coverage=1 00:18:47.139 --rc genhtml_legend=1 00:18:47.139 --rc geninfo_all_blocks=1 00:18:47.139 --rc geninfo_unexecuted_blocks=1 00:18:47.139 00:18:47.139 ' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:47.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.139 --rc genhtml_branch_coverage=1 00:18:47.139 --rc genhtml_function_coverage=1 00:18:47.139 --rc genhtml_legend=1 00:18:47.139 --rc geninfo_all_blocks=1 00:18:47.139 --rc geninfo_unexecuted_blocks=1 00:18:47.139 00:18:47.139 ' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:47.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.139 --rc genhtml_branch_coverage=1 00:18:47.139 --rc genhtml_function_coverage=1 
00:18:47.139 --rc genhtml_legend=1 00:18:47.139 --rc geninfo_all_blocks=1 00:18:47.139 --rc geninfo_unexecuted_blocks=1 00:18:47.139 00:18:47.139 '
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:47.139 [... nvmf/common.sh and paths/export.sh xtrace elided: the same port/NQN defaults and toolchain PATH re-exports already shown for nvmf_host_discovery above ...]
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:18:47.139 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:47.139 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.140 11:57:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.708 11:58:00 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:53.708 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:53.708 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:53.708 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:53.709 Found net devices under 0000:da:00.0: mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:53.709 Found net devices under 0000:da:00.1: mlx_0_1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:53.709 
11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:53.709 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.709 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:18:53.709 altname enp218s0f0np0 00:18:53.709 altname ens818f0np0 00:18:53.709 inet 192.168.100.8/24 scope global mlx_0_0 00:18:53.709 valid_lft forever preferred_lft forever 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
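A brief aside on the trace above: the earlier "[: : integer expression expected" complaint at nvmf/common.sh line 33 is bash refusing to -eq-compare an empty (likely unset) test flag ('[' '' -eq 1 ']'); the test simply evaluates false and the run continues unharmed. The get_ip_address helper just traced reads each RDMA interface's IPv4 address with a small ip/awk/cut pipeline. A stand-alone sketch of the same idiom, assuming an interface named mlx_0_0 as on this rig:

  # `ip -o` prints one record per line; field 4 is "ADDR/PREFIX", cut drops the prefix
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this host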
00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:53.709 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.709 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:18:53.709 altname enp218s0f1np1 00:18:53.709 altname ens818f1np1 00:18:53.709 inet 192.168.100.9/24 scope global mlx_0_1 00:18:53.709 valid_lft forever preferred_lft forever 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:53.709 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:53.710 11:58:00 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:53.710 192.168.100.9' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:53.710 192.168.100.9' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:53.710 192.168.100.9' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.710 11:58:00 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3284835 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3284835 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3284835 ']' 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.710 11:58:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.710 [2024-12-09 11:58:00.983878] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:18:53.710 [2024-12-09 11:58:00.983933] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.710 [2024-12-09 11:58:01.063844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.710 [2024-12-09 11:58:01.103928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.710 [2024-12-09 11:58:01.103965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.710 [2024-12-09 11:58:01.103972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.710 [2024-12-09 11:58:01.103977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.710 [2024-12-09 11:58:01.103982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
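The startup just traced boils down to two moving parts: nvmfappstart launches nvmf_tgt with the arguments assembled earlier (-i 0 -e 0xFFFF -m 0x3), and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A sketch of that sequence run from an SPDK checkout; the spdk_get_version probe is a stand-in assumption for the suite's own waitforlisten polling, not its exact mechanism:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the target is up and answering
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "nvmf_tgt listening, pid $nvmfpid"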
00:18:53.710 [2024-12-09 11:58:01.105161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.710 [2024-12-09 11:58:01.105162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3284835 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:53.710 [2024-12-09 11:58:01.435770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x125b1f0/0x125f6e0) succeed. 00:18:53.710 [2024-12-09 11:58:01.445613] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125c740/0x12a0d80) succeed. 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:53.710 Malloc0 00:18:53.710 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:53.970 11:58:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.229 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:54.488 [2024-12-09 11:58:02.302775] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:54.488 [2024-12-09 11:58:02.507138] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3285192 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3285192 /var/tmp/bdevperf.sock 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3285192 ']' 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.488 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:54.747 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.747 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:54.747 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:55.006 11:58:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:55.264 Nvme0n1 00:18:55.264 11:58:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:55.523 Nvme0n1 00:18:55.523 11:58:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:55.523 11:58:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:58.056 11:58:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:58.056 11:58:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:18:58.056 11:58:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:58.056 11:58:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:58.992 11:58:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:58.992 11:58:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:58.992 11:58:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.992 11:58:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.251 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:59.511 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.511 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:59.511 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.511 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:59.770 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.770 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:59.770 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.770 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:00.028 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.028 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
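Each check_status above is six port_status probes in a row: current, connected and accessible for listener 4420, then the same three for 4421, all answered by one bdev_nvme_get_io_paths RPC filtered through jq. A sketch of that helper as the trace implies it, with the bdevperf RPC socket taken from this run:

  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      # one io_paths dump, one jq filter keyed on the listener's trsvcid
      actual=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }
  port_status 4420 current true      # 4420 is the path carrying I/O
  port_status 4421 accessible true   # 4421 reachable per its ANA state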
00:19:00.028 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.028 11:58:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:00.287 11:58:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.287 11:58:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:00.287 11:58:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:00.287 11:58:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:00.545 11:58:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:01.481 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:01.481 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:01.481 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.481 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.739 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.739 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:01.740 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.740 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.998 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.998 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.998 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.998 11:58:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.259 11:58:10 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:02.259 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.519 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.519 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:02.519 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.519 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.777 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.777 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:02.777 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:03.036 11:58:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:19:03.294 11:58:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:04.233 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:04.233 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:04.233 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.233 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.493 11:58:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.493 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.752 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.752 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.752 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.752 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.010 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.010 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.010 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.010 11:58:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:19:05.269 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:05.528 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:19:05.787 11:58:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:06.725 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:06.725 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:06.725 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.725 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.984 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.984 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.984 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.984 11:58:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.243 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.243 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.243 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.243 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.503 11:58:15 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.503 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:07.761 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.761 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:07.762 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.762 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.020 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.020 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:08.020 11:58:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:19:08.279 11:58:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:19:08.279 11:58:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.657 11:58:17 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.657 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:09.916 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.916 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:09.916 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.916 11:58:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.174 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.174 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:10.174 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.174 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:10.433 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.433 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:10.433 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.433 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:10.692 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.692 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:10.692 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:19:10.693 11:58:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:10.951 11:58:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:11.890 11:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:11.890 11:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:11.890 11:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.890 11:58:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:12.149 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:12.149 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:12.149 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.149 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:12.407 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.407 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:12.407 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.407 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.666 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:12.925 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
00:19:12.925 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:12.925 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:12.925 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:12.925 11:58:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:13.184 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:13.184 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:19:13.442 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:19:13.442 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:19:13.442 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:19:13.701 11:58:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:19:14.638 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:19:14.638 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:19:14.638 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:14.638 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:14.897 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:14.897 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:19:14.897 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:14.897 11:58:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:15.155 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:15.155 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:15.155 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
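At @116 the test switches the Nvme0n1 bdev to the active_active multipath policy, and at @119 it drives both target listeners through set_ANA_state. The @59/@60 trace implies a two-line helper along these lines (a sketch, with the same rpc.py path caveat as above):

    # set_ANA_state <state_for_4420> <state_for_4421> -- sketch inferred from @59/@60
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

With both listeners optimized and the policy now active_active, the check_status true true true true true true that follows is the expected outcome: both paths count as current at once, where the earlier checks had exactly one current path.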
00:19:15.155 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:15.414 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:15.414 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:15.414 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:15.414 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:15.673 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:15.932 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:15.932 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:19:15.932 11:58:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:19:16.191 11:58:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:19:16.450 11:58:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:19:17.387 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:19:17.387 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:19:17.387 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.387 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.646 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:17.905 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.905 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:17.905 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.905 11:58:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:18.164 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:18.164 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:18.164 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:18.164 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
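Each port_status call above re-runs the full get_io_paths RPC and filters out a single field, which keeps the trace easy to follow. When poking at a run by hand, the same RPC can be asked for everything at once; an illustrative one-liner (not part of the test script):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[]
                 | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'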
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:19:18.423 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:19:18.682 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:19:18.941 11:58:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:19:19.877 11:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:19:19.877 11:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:19:19.877 11:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:19.877 11:58:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:20.136 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.136 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:19:20.136 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.136 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:20.395 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.395 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:20.395 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.395 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:20.654 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.913 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.913 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:20.913 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.913 11:58:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:21.171 11:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:21.171 11:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:19:21.171 11:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:19:21.429 11:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:19:21.429 11:58:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:22.805 11:58:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:23.064 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:23.064 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:23.064 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:23.064 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:23.323 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:23.323 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:23.323 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:23.323 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:23.581 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:23.581 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:19:23.581 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:23.581 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
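Taken together, the four set_ANA_state/check_status rounds above (@119 through @135) pin down how ANA state maps to io_path flags under active_active: connected stays true throughout, accessible only drops for an inaccessible listener, and current tracks the best available ANA state -- both ports when both are optimized (or both non_optimized), only the optimized one in the mixed case. The whole sequence condenses to a table-driven loop like this sketch (a hypothetical restructuring of the same calls, reusing the helpers sketched earlier):

    # ANA state per port -> expected current/accessible flags, as observed above
    for row in \
        'optimized     optimized     true  true  true  true' \
        'non_optimized optimized     false true  true  true' \
        'non_optimized non_optimized true  true  true  true' \
        'non_optimized inaccessible  true  false true  false'; do
        read -r s4420 s4421 cur1 cur2 acc1 acc2 <<< "$row"
        set_ANA_state "$s4420" "$s4421"
        sleep 1
        port_status 4420 current "$cur1";    port_status 4421 current "$cur2"
        port_status 4420 connected true;     port_status 4421 connected true
        port_status 4420 accessible "$acc1"; port_status 4421 accessible "$acc2"
    done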
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3285192
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3285192 ']'
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3285192
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:23.843 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3285192
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3285192'
killing process with pid 3285192
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3285192
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3285192
00:19:23.844 {
00:19:23.844 "results": [
00:19:23.844 {
00:19:23.844 "job": "Nvme0n1",
00:19:23.844 "core_mask": "0x4",
00:19:23.844 "workload": "verify",
00:19:23.844 "status": "terminated",
00:19:23.844 "verify_range": {
00:19:23.844 "start": 0,
00:19:23.844 "length": 16384
00:19:23.844 },
00:19:23.844 "queue_depth": 128,
00:19:23.844 "io_size": 4096,
00:19:23.844 "runtime": 28.020475,
00:19:23.844 "iops": 15377.540887511721,
00:19:23.844 "mibps": 60.06851909184266,
00:19:23.844 "io_failed": 0,
00:19:23.844 "io_timeout": 0,
00:19:23.844 "avg_latency_us": 8303.889630734282,
00:19:23.844 "min_latency_us": 581.2419047619047,
00:19:23.844 "max_latency_us": 3019898.88
00:19:23.844 }
00:19:23.844 ],
00:19:23.844 "core_count": 1
00:19:23.844 }
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3285192
00:19:23.844 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:19:23.844 [2024-12-09 11:58:02.579661] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:19:23.844 [2024-12-09 11:58:02.579719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285192 ]
00:19:23.844 [2024-12-09 11:58:02.659830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:23.844 [2024-12-09 11:58:02.700095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:23.844 Running I/O for 90 seconds...
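killprocess (common/autotest_common.sh@954-@978) is the harness's guarded kill: refuse an empty pid, confirm the process still exists with kill -0, check via ps that it is an SPDK reactor rather than a stray sudo, then kill it and wait. The JSON block that the wait flushes out is bdevperf's per-job summary, and its numbers are internally consistent: 15377.54 IOPS x 4096-byte I/O / 2^20 = 60.07 MiB/s, the reported mibps, and 15377.54 IOPS x 28.02 s of runtime is roughly 430,900 I/Os verified with io_failed 0; "status": "terminated" records that the 90-second job was stopped early by the kill. A quick way to pull the headline numbers back out of such a summary (illustrative; assumes the JSON was saved to a file such as results.json):

    jq -r '.results[]
           | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s (failed: \(.io_failed))"' results.json

The try.txt replay that begins above continues below with bdevperf's per-second throughput samples and then a dump of in-flight commands from earlier in the run: while a listener was inaccessible, queued I/O on that path was completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) -- NVMe path-related status (type 0x3), ANA inaccessible (code 0x02) -- leaving the multipath layer to retry those I/Os on the remaining accessible path.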
00:19:23.844 17664.00 IOPS, 69.00 MiB/s [2024-12-09T10:58:31.897Z] 17792.00 IOPS, 69.50 MiB/s [2024-12-09T10:58:31.897Z] 17834.67 IOPS, 69.67 MiB/s [2024-12-09T10:58:31.897Z] 17856.00 IOPS, 69.75 MiB/s [2024-12-09T10:58:31.897Z] 17843.20 IOPS, 69.70 MiB/s [2024-12-09T10:58:31.897Z] 17856.00 IOPS, 69.75 MiB/s [2024-12-09T10:58:31.897Z] 17845.86 IOPS, 69.71 MiB/s [2024-12-09T10:58:31.897Z] 17837.50 IOPS, 69.68 MiB/s [2024-12-09T10:58:31.897Z] 17841.67 IOPS, 69.69 MiB/s [2024-12-09T10:58:31.897Z] 17841.20 IOPS, 69.69 MiB/s [2024-12-09T10:58:31.897Z] 17849.45 IOPS, 69.72 MiB/s [2024-12-09T10:58:31.897Z] 17855.50 IOPS, 69.75 MiB/s [2024-12-09T10:58:31.897Z] [2024-12-09 11:58:16.101939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.101978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76560 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004382000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x182700 
00:19:23.844 [2024-12-09 11:58:16.102282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x182700 00:19:23.844 [2024-12-09 11:58:16.102605] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:23.844 [2024-12-09 11:58:16.102616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.102633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.102650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.102667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.102685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.102705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.102711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.103023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.103042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.103092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.103109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182700 00:19:23.845 [2024-12-09 11:58:16.103126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77008 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.845 [2024-12-09 11:58:16.103585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.845 [2024-12-09 11:58:16.103595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:23.846 [2024-12-09 11:58:16.103733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.846 [2024-12-09 11:58:16.103739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:19:23.846 [2024-12-09 11:58:16.103749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:23.846 [2024-12-09 11:58:16.103756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:19:23.846 [2024-12-09 11:58:16.104199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x182700
00:19:23.846 [2024-12-09 11:58:16.104205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:23.846 [... command/completion pairs of the same shape elided: WRITE lba 77256-77520 (SGL DATA BLOCK OFFSET) and READ lba 76808-76944 (SGL KEYED DATA BLOCK, key:0x182700), every completion on qid:1 reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
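Every completion above carries the status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": status code type 0x3 (path-related) with status code 0x2, which is what an ANA multipath test expects while one path is made inaccessible. A quick way to tally such statuses from a saved copy of this console output is the hedged sketch below; the file name console.log is an assumption, not something this job is shown to write.

    #!/usr/bin/env bash
    # Count spdk_nvme_print_completion records per status text/code.
    # The input file name is hypothetical; point it at wherever the run
    # output was captured.
    grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]\{2\}/[0-9a-f]\{2\})' console.log |
        sed 's/.*\*NOTICE\*: //' |
        sort | uniq -c | sort -rn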
00:19:23.847 17119.62 IOPS, 66.87 MiB/s [2024-12-09T10:58:31.900Z] 15896.79 IOPS, 62.10 MiB/s [2024-12-09T10:58:31.900Z] 14837.00 IOPS, 57.96 MiB/s [2024-12-09T10:58:31.900Z] 14506.38 IOPS, 56.67 MiB/s [2024-12-09T10:58:31.900Z] 14697.12 IOPS, 57.41 MiB/s [2024-12-09T10:58:31.900Z] 14849.39 IOPS, 58.01 MiB/s [2024-12-09T10:58:31.900Z] 14848.00 IOPS, 58.00 MiB/s [2024-12-09T10:58:31.900Z] 14848.45 IOPS, 58.00 MiB/s [2024-12-09T10:58:31.900Z] 14936.81 IOPS, 58.35 MiB/s [2024-12-09T10:58:31.900Z] 15075.64 IOPS, 58.89 MiB/s [2024-12-09T10:58:31.900Z] 15200.70 IOPS, 59.38 MiB/s [2024-12-09T10:58:31.900Z] 15207.38 IOPS, 59.40 MiB/s [2024-12-09T10:58:31.900Z] 15186.32 IOPS, 59.32 MiB/s [2024-12-09T10:58:31.900Z]
00:19:23.847 [2024-12-09 11:58:29.439857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x182700
00:19:23.847 [2024-12-09 11:58:29.439895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:19:23.848 [... command/completion pairs of the same shape elided: READ lba 111920-112368 (SGL KEYED DATA BLOCK, key:0x182700) and WRITE lba 112392-112928 (SGL DATA BLOCK OFFSET), every completion on qid:1 reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:19:23.849 15193.69 IOPS, 59.35 MiB/s [2024-12-09T10:58:31.902Z] 15289.22 IOPS, 59.72 MiB/s [2024-12-09T10:58:31.902Z] 15382.54 IOPS, 60.09 MiB/s [2024-12-09T10:58:31.902Z] Received shutdown signal, test time was about 28.021098 seconds
00:19:23.849
00:19:23.849 Latency(us)
00:19:23.849 [2024-12-09T10:58:31.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:23.849 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:23.849 Verification LBA range: start 0x0 length 0x4000
00:19:23.849 Nvme0n1 : 28.02 15377.54 60.07 0.00 0.00 8303.89 581.24 3019898.88
00:19:23.849 [2024-12-09T10:58:31.902Z] ===================================================================================================================
00:19:23.849 [2024-12-09T10:58:31.902Z] Total : 15377.54 60.07 0.00 0.00 8303.89 581.24 3019898.88
00:19:23.849 11:58:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
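With a fixed 4096-byte I/O size, the MiB/s column is just IOPS scaled by block size, which is how 15377.54 IOPS in the summary lines up with 60.07 MiB/s. A one-line sanity check (plain awk arithmetic, nothing job-specific):

    awk 'BEGIN { iops = 15377.54; bs = 4096; printf "%.2f MiB/s\n", iops * bs / (1024 * 1024) }'
    # prints 60.07, matching the Nvme0n1 row above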
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3284835 ']'
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3284835
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3284835 ']'
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3284835
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:24.108 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3284835
00:19:24.367 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:24.367 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:24.367 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284835'
killing process with pid 3284835
00:19:24.367 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3284835
00:19:24.367 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3284835
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:24.627
00:19:24.627 real 0m37.573s
00:19:24.627 user 1m50.367s
00:19:24.627 sys 0m7.708s
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
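The nvmfcleanup trace above briefly disables errexit, tries to unload nvme-rdma and nvme-fabrics inside a bounded loop, and then restores set -e; here it succeeds on the first pass. A minimal sketch of that retry pattern is below. It is a reconstruction, not the verbatim nvmf/common.sh body, and the one-second backoff is an assumption.

    set +e
    for i in {1..20}; do
        # modprobe -r prints the underlying "rmmod ..." lines with -v,
        # which is where the rmmod nvme_rdma / rmmod nvme_fabrics output
        # in the trace comes from.
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e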
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:24.627 ************************************
00:19:24.627 END TEST nvmf_host_multipath_status
00:19:24.627 ************************************
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.627 ************************************
00:19:24.627 START TEST nvmf_discovery_remove_ifc
00:19:24.627 ************************************
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:19:24.627 * Looking for test storage...
00:19:24.627 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
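The banner blocks above and below each test come from the run_test helper, which names the test, runs the command under the time builtin (the real/user/sys lines), and prints matching START/END banners. A hedged reconstruction of that wrapper shape follows; the actual helper lives in common/autotest_common.sh and also manages xtrace state, which this sketch omits.

    run_test() {
        # Hypothetical reimplementation of the banner-and-time wrapper
        # seen in this log; not the verbatim autotest_common.sh source.
        local name=$1; shift
        local bar; bar=$(printf '*%.0s' {1..36})
        printf '%s\nSTART TEST %s\n%s\n' "$bar" "$name" "$bar"
        time "$@"
        printf '%s\nEND TEST %s\n%s\n' "$bar" "$name" "$bar"
    }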
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:24.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:24.627 --rc genhtml_branch_coverage=1
00:19:24.627 --rc genhtml_function_coverage=1
00:19:24.627 --rc genhtml_legend=1
00:19:24.627 --rc geninfo_all_blocks=1
00:19:24.627 --rc geninfo_unexecuted_blocks=1
00:19:24.627
00:19:24.627 '
[... the matching LCOV_OPTS=, export 'LCOV=lcov ...' and LCOV='lcov ...' assignments repeat the identical option block three more times; elided ...]
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
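The field-by-field walk traced above is how scripts/common.sh decides that lcov 1.15 predates 2.x: both version strings are split on ".-:" and compared numerically, position by position. A condensed, hedged sketch of the same idea (not the verbatim cmp_versions source):

    lt() {
        # Succeeds when version $1 sorts before version $2.
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1  # versions are equal
    }
    lt 1.15 2 && echo "lcov is older than 2.x"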
00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.627 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.887 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:24.887 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:24.887 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.887 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.888 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:19:24.888
00:19:24.888 real 0m0.196s
00:19:24.888 user 0m0.122s
00:19:24.888 sys 0m0.087s
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:19:24.888 ************************************
00:19:24.888 END TEST nvmf_discovery_remove_ifc
00:19:24.888 ************************************
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.888 ************************************
00:19:24.888 START TEST nvmf_identify_kernel_target
00:19:24.888 ************************************
00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:19:24.888 * Looking for test storage...
00:19:24.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
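The "[: : integer expression expected" complaint earlier in this trace (nvmf/common.sh line 33, the '[' '' -eq 1 ']' step) is the classic failure mode of test's -eq operator: it needs integers on both sides, and the variable being tested expanded to an empty string. A hedged defensive rewrite is to default the operand before the numeric comparison; the flag name below is hypothetical, since the log does not show which variable line 33 reads.

    # SOME_TEST_FLAG stands in for whatever nvmf/common.sh line 33 tests.
    SOME_TEST_FLAG=""
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
    # With the :-0 default, an unset/empty flag compares as 0 instead of
    # raising "[: : integer expression expected".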
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.888 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:25.148 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:25.148 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.148 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:25.148 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.149 --rc genhtml_branch_coverage=1 00:19:25.149 --rc genhtml_function_coverage=1 00:19:25.149 --rc genhtml_legend=1 00:19:25.149 --rc geninfo_all_blocks=1 00:19:25.149 --rc geninfo_unexecuted_blocks=1 00:19:25.149 00:19:25.149 ' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.149 --rc genhtml_branch_coverage=1 00:19:25.149 --rc genhtml_function_coverage=1 00:19:25.149 --rc genhtml_legend=1 00:19:25.149 --rc geninfo_all_blocks=1 00:19:25.149 --rc geninfo_unexecuted_blocks=1 00:19:25.149 00:19:25.149 ' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.149 --rc genhtml_branch_coverage=1 00:19:25.149 --rc genhtml_function_coverage=1 00:19:25.149 --rc genhtml_legend=1 00:19:25.149 --rc geninfo_all_blocks=1 00:19:25.149 --rc geninfo_unexecuted_blocks=1 00:19:25.149 00:19:25.149 ' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.149 --rc genhtml_branch_coverage=1 00:19:25.149 --rc genhtml_function_coverage=1 00:19:25.149 --rc genhtml_legend=1 00:19:25.149 --rc geninfo_all_blocks=1 00:19:25.149 --rc geninfo_unexecuted_blocks=1 00:19:25.149 00:19:25.149 ' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.149 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.149 11:58:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.720 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:31.721 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:31.721 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:31.721 Found net devices under 0000:da:00.0: mlx_0_0 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:31.721 Found net devices under 0000:da:00.1: mlx_0_1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.721 11:58:38 
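
The "[: : integer expression expected" complaint that common.sh line 33 logs twice above comes straight from the xtraced test '[' '' -eq 1 ']': an empty string is being compared with the numeric -eq operator, which the [ builtin rejects. A minimal sketch of the failing pattern and the usual guard; the flag name is hypothetical, since the xtrace does not show which variable expanded to the empty string:

    # Failing pattern: numeric test against an unset/empty flag.
    SPDK_TEST_SOME_FLAG=''
    [ "$SPDK_TEST_SOME_FLAG" -eq 1 ]    # -> [: : integer expression expected

    # Guard: default the flag to 0 before the numeric comparison.
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi
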
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:31.721 
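
rdma_device_init, xtraced just above, loads the kernel RDMA stack one module at a time (ib_cm through rdma_ucm). The same sequence as a standalone sketch; the exit-on-failure behavior is an assumption, since the harness's xtrace does not show its error handling:

    # Load the kernel modules the RDMA test path depends on, in the
    # order the harness uses.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "missing kernel module: $mod" >&2; exit 1; }
    done
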
11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:31.721 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.721 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:31.721 altname enp218s0f0np0 00:19:31.721 altname ens818f0np0 00:19:31.721 inet 192.168.100.8/24 scope global mlx_0_0 00:19:31.721 valid_lft forever preferred_lft forever 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:31.721 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:31.722 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.722 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:31.722 altname enp218s0f1np1 00:19:31.722 altname ens818f1np1 00:19:31.722 inet 192.168.100.9/24 scope global mlx_0_1 00:19:31.722 valid_lft forever preferred_lft forever 00:19:31.722 11:58:38 
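
get_ip_address, xtraced above for mlx_0_0 and mlx_0_1, is an ip/awk/cut pipeline. A self-contained version of the same pipeline from common.sh@117:

    # Print the IPv4 address of an interface, stripped of its /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # prints 192.168.100.8 on this node
    get_ip_address mlx_0_1    # prints 192.168.100.9 on this node
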
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:31.722 
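
A little further on, get_main_ns_ip maps the transport under test to the name of the shell variable holding its target IP and then reads that variable by name. A sketch of the same shape, using bash indirect expansion as the xtrace (ip_candidates, then the ${!ip} test) implies:

    # Resolve the transport to a variable name, then to its value.
    NVMF_FIRST_TARGET_IP=192.168.100.8
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[rdma]}
    [[ -z ${!ip} ]] || echo "${!ip}"    # prints 192.168.100.8
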
11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:31.722 192.168.100.9' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:31.722 192.168.100.9' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:31.722 192.168.100.9' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:31.722 11:58:38 
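
The two target IPs are carved out of the newline-separated RDMA_IP_LIST with the head/tail pipelines xtraced just above at common.sh@485 and @486. The same derivation in isolation:

    # First line becomes the first target, second line the second.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
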
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:31.722 11:58:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:33.627 Waiting for block devices as requested 00:19:33.627 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:19:33.627 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:33.627 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:33.886 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:33.886 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:33.886 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:34.146 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:34.146 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:34.146 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:34.146 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:34.405 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:34.405 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:34.405 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:34.664 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:34.664 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:34.664 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:34.923 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:34.923 11:58:42 
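
configure_kernel_target, whose paths are defined just above, builds the kernel nvmet target purely through configfs; the records that follow perform the mkdir/echo/ln steps with their redirection targets elided by xtrace. A condensed sketch of the same sequence, assuming the standard Linux nvmet attribute file names (the log only shows the values being written, not the files they go to):

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma          > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
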
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:34.923 No valid GPT data, bailing 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:34.923 11:58:42 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:35.182 00:19:35.182 Discovery Log Number of Records 2, Generation counter 2 00:19:35.182 =====Discovery Log Entry 0====== 00:19:35.182 trtype: rdma 00:19:35.182 adrfam: ipv4 00:19:35.182 subtype: current discovery subsystem 00:19:35.182 treq: not specified, sq 
flow control disable supported 00:19:35.182 portid: 1 00:19:35.182 trsvcid: 4420 00:19:35.182 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:35.182 traddr: 192.168.100.8 00:19:35.182 eflags: none 00:19:35.182 rdma_prtype: not specified 00:19:35.182 rdma_qptype: connected 00:19:35.182 rdma_cms: rdma-cm 00:19:35.182 rdma_pkey: 0x0000 00:19:35.182 =====Discovery Log Entry 1====== 00:19:35.182 trtype: rdma 00:19:35.182 adrfam: ipv4 00:19:35.182 subtype: nvme subsystem 00:19:35.182 treq: not specified, sq flow control disable supported 00:19:35.182 portid: 1 00:19:35.182 trsvcid: 4420 00:19:35.182 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:35.182 traddr: 192.168.100.8 00:19:35.182 eflags: none 00:19:35.182 rdma_prtype: not specified 00:19:35.182 rdma_qptype: connected 00:19:35.182 rdma_cms: rdma-cm 00:19:35.182 rdma_pkey: 0x0000 00:19:35.182 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:19:35.182 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:35.442 ===================================================== 00:19:35.442 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:35.442 ===================================================== 00:19:35.442 Controller Capabilities/Features 00:19:35.442 ================================ 00:19:35.442 Vendor ID: 0000 00:19:35.442 Subsystem Vendor ID: 0000 00:19:35.442 Serial Number: a4da771a380875fa7577 00:19:35.442 Model Number: Linux 00:19:35.442 Firmware Version: 6.8.9-20 00:19:35.442 Recommended Arb Burst: 0 00:19:35.442 IEEE OUI Identifier: 00 00 00 00:19:35.442 Multi-path I/O 00:19:35.442 May have multiple subsystem ports: No 00:19:35.442 May have multiple controllers: No 00:19:35.442 Associated with SR-IOV VF: No 00:19:35.442 Max Data Transfer Size: Unlimited 00:19:35.442 Max Number of Namespaces: 0 00:19:35.442 Max Number of I/O Queues: 1024 00:19:35.442 NVMe Specification Version (VS): 1.3 00:19:35.442 NVMe Specification Version (Identify): 1.3 00:19:35.442 Maximum Queue Entries: 128 00:19:35.442 Contiguous Queues Required: No 00:19:35.442 Arbitration Mechanisms Supported 00:19:35.442 Weighted Round Robin: Not Supported 00:19:35.442 Vendor Specific: Not Supported 00:19:35.442 Reset Timeout: 7500 ms 00:19:35.442 Doorbell Stride: 4 bytes 00:19:35.442 NVM Subsystem Reset: Not Supported 00:19:35.442 Command Sets Supported 00:19:35.442 NVM Command Set: Supported 00:19:35.442 Boot Partition: Not Supported 00:19:35.442 Memory Page Size Minimum: 4096 bytes 00:19:35.442 Memory Page Size Maximum: 4096 bytes 00:19:35.442 Persistent Memory Region: Not Supported 00:19:35.442 Optional Asynchronous Events Supported 00:19:35.442 Namespace Attribute Notices: Not Supported 00:19:35.442 Firmware Activation Notices: Not Supported 00:19:35.442 ANA Change Notices: Not Supported 00:19:35.442 PLE Aggregate Log Change Notices: Not Supported 00:19:35.442 LBA Status Info Alert Notices: Not Supported 00:19:35.442 EGE Aggregate Log Change Notices: Not Supported 00:19:35.442 Normal NVM Subsystem Shutdown event: Not Supported 00:19:35.442 Zone Descriptor Change Notices: Not Supported 00:19:35.442 Discovery Log Change Notices: Supported 00:19:35.442 Controller Attributes 00:19:35.442 128-bit Host Identifier: Not Supported 00:19:35.442 Non-Operational Permissive Mode: Not Supported 00:19:35.442 NVM Sets: Not Supported 00:19:35.442 Read Recovery Levels: 
Not Supported 00:19:35.442 Endurance Groups: Not Supported 00:19:35.442 Predictable Latency Mode: Not Supported 00:19:35.442 Traffic Based Keep ALive: Not Supported 00:19:35.442 Namespace Granularity: Not Supported 00:19:35.442 SQ Associations: Not Supported 00:19:35.442 UUID List: Not Supported 00:19:35.442 Multi-Domain Subsystem: Not Supported 00:19:35.442 Fixed Capacity Management: Not Supported 00:19:35.442 Variable Capacity Management: Not Supported 00:19:35.442 Delete Endurance Group: Not Supported 00:19:35.442 Delete NVM Set: Not Supported 00:19:35.442 Extended LBA Formats Supported: Not Supported 00:19:35.442 Flexible Data Placement Supported: Not Supported 00:19:35.442 00:19:35.442 Controller Memory Buffer Support 00:19:35.442 ================================ 00:19:35.442 Supported: No 00:19:35.442 00:19:35.442 Persistent Memory Region Support 00:19:35.442 ================================ 00:19:35.442 Supported: No 00:19:35.442 00:19:35.442 Admin Command Set Attributes 00:19:35.442 ============================ 00:19:35.442 Security Send/Receive: Not Supported 00:19:35.442 Format NVM: Not Supported 00:19:35.442 Firmware Activate/Download: Not Supported 00:19:35.442 Namespace Management: Not Supported 00:19:35.442 Device Self-Test: Not Supported 00:19:35.442 Directives: Not Supported 00:19:35.442 NVMe-MI: Not Supported 00:19:35.442 Virtualization Management: Not Supported 00:19:35.442 Doorbell Buffer Config: Not Supported 00:19:35.442 Get LBA Status Capability: Not Supported 00:19:35.442 Command & Feature Lockdown Capability: Not Supported 00:19:35.442 Abort Command Limit: 1 00:19:35.442 Async Event Request Limit: 1 00:19:35.442 Number of Firmware Slots: N/A 00:19:35.442 Firmware Slot 1 Read-Only: N/A 00:19:35.442 Firmware Activation Without Reset: N/A 00:19:35.442 Multiple Update Detection Support: N/A 00:19:35.442 Firmware Update Granularity: No Information Provided 00:19:35.442 Per-Namespace SMART Log: No 00:19:35.442 Asymmetric Namespace Access Log Page: Not Supported 00:19:35.442 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:35.442 Command Effects Log Page: Not Supported 00:19:35.442 Get Log Page Extended Data: Supported 00:19:35.442 Telemetry Log Pages: Not Supported 00:19:35.442 Persistent Event Log Pages: Not Supported 00:19:35.442 Supported Log Pages Log Page: May Support 00:19:35.442 Commands Supported & Effects Log Page: Not Supported 00:19:35.442 Feature Identifiers & Effects Log Page:May Support 00:19:35.442 NVMe-MI Commands & Effects Log Page: May Support 00:19:35.442 Data Area 4 for Telemetry Log: Not Supported 00:19:35.442 Error Log Page Entries Supported: 1 00:19:35.442 Keep Alive: Not Supported 00:19:35.442 00:19:35.442 NVM Command Set Attributes 00:19:35.442 ========================== 00:19:35.442 Submission Queue Entry Size 00:19:35.443 Max: 1 00:19:35.443 Min: 1 00:19:35.443 Completion Queue Entry Size 00:19:35.443 Max: 1 00:19:35.443 Min: 1 00:19:35.443 Number of Namespaces: 0 00:19:35.443 Compare Command: Not Supported 00:19:35.443 Write Uncorrectable Command: Not Supported 00:19:35.443 Dataset Management Command: Not Supported 00:19:35.443 Write Zeroes Command: Not Supported 00:19:35.443 Set Features Save Field: Not Supported 00:19:35.443 Reservations: Not Supported 00:19:35.443 Timestamp: Not Supported 00:19:35.443 Copy: Not Supported 00:19:35.443 Volatile Write Cache: Not Present 00:19:35.443 Atomic Write Unit (Normal): 1 00:19:35.443 Atomic Write Unit (PFail): 1 00:19:35.443 Atomic Compare & Write Unit: 1 00:19:35.443 Fused Compare & Write: Not 
Supported 00:19:35.443 Scatter-Gather List 00:19:35.443 SGL Command Set: Supported 00:19:35.443 SGL Keyed: Supported 00:19:35.443 SGL Bit Bucket Descriptor: Not Supported 00:19:35.443 SGL Metadata Pointer: Not Supported 00:19:35.443 Oversized SGL: Not Supported 00:19:35.443 SGL Metadata Address: Not Supported 00:19:35.443 SGL Offset: Supported 00:19:35.443 Transport SGL Data Block: Not Supported 00:19:35.443 Replay Protected Memory Block: Not Supported 00:19:35.443 00:19:35.443 Firmware Slot Information 00:19:35.443 ========================= 00:19:35.443 Active slot: 0 00:19:35.443 00:19:35.443 00:19:35.443 Error Log 00:19:35.443 ========= 00:19:35.443 00:19:35.443 Active Namespaces 00:19:35.443 ================= 00:19:35.443 Discovery Log Page 00:19:35.443 ================== 00:19:35.443 Generation Counter: 2 00:19:35.443 Number of Records: 2 00:19:35.443 Record Format: 0 00:19:35.443 00:19:35.443 Discovery Log Entry 0 00:19:35.443 ---------------------- 00:19:35.443 Transport Type: 1 (RDMA) 00:19:35.443 Address Family: 1 (IPv4) 00:19:35.443 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:35.443 Entry Flags: 00:19:35.443 Duplicate Returned Information: 0 00:19:35.443 Explicit Persistent Connection Support for Discovery: 0 00:19:35.443 Transport Requirements: 00:19:35.443 Secure Channel: Not Specified 00:19:35.443 Port ID: 1 (0x0001) 00:19:35.443 Controller ID: 65535 (0xffff) 00:19:35.443 Admin Max SQ Size: 32 00:19:35.443 Transport Service Identifier: 4420 00:19:35.443 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:35.443 Transport Address: 192.168.100.8 00:19:35.443 Transport Specific Address Subtype - RDMA 00:19:35.443 RDMA QP Service Type: 1 (Reliable Connected) 00:19:35.443 RDMA Provider Type: 1 (No provider specified) 00:19:35.443 RDMA CM Service: 1 (RDMA_CM) 00:19:35.443 Discovery Log Entry 1 00:19:35.443 ---------------------- 00:19:35.443 Transport Type: 1 (RDMA) 00:19:35.443 Address Family: 1 (IPv4) 00:19:35.443 Subsystem Type: 2 (NVM Subsystem) 00:19:35.443 Entry Flags: 00:19:35.443 Duplicate Returned Information: 0 00:19:35.443 Explicit Persistent Connection Support for Discovery: 0 00:19:35.443 Transport Requirements: 00:19:35.443 Secure Channel: Not Specified 00:19:35.443 Port ID: 1 (0x0001) 00:19:35.443 Controller ID: 65535 (0xffff) 00:19:35.443 Admin Max SQ Size: 32 00:19:35.443 Transport Service Identifier: 4420 00:19:35.443 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:35.443 Transport Address: 192.168.100.8 00:19:35.443 Transport Specific Address Subtype - RDMA 00:19:35.443 RDMA QP Service Type: 1 (Reliable Connected) 00:19:35.443 RDMA Provider Type: 1 (No provider specified) 00:19:35.443 RDMA CM Service: 1 (RDMA_CM) 00:19:35.443 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:35.443 get_feature(0x01) failed 00:19:35.443 get_feature(0x02) failed 00:19:35.443 get_feature(0x04) failed 00:19:35.443 ===================================================== 00:19:35.443 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:19:35.443 ===================================================== 00:19:35.443 Controller Capabilities/Features 00:19:35.443 ================================ 00:19:35.443 Vendor ID: 0000 00:19:35.443 Subsystem Vendor ID: 0000 00:19:35.443 Serial Number: 
ed06044c148a7e9f29b5 00:19:35.443 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:35.443 Firmware Version: 6.8.9-20 00:19:35.443 Recommended Arb Burst: 6 00:19:35.443 IEEE OUI Identifier: 00 00 00 00:19:35.443 Multi-path I/O 00:19:35.443 May have multiple subsystem ports: Yes 00:19:35.443 May have multiple controllers: Yes 00:19:35.443 Associated with SR-IOV VF: No 00:19:35.443 Max Data Transfer Size: 1048576 00:19:35.443 Max Number of Namespaces: 1024 00:19:35.443 Max Number of I/O Queues: 128 00:19:35.443 NVMe Specification Version (VS): 1.3 00:19:35.443 NVMe Specification Version (Identify): 1.3 00:19:35.443 Maximum Queue Entries: 128 00:19:35.443 Contiguous Queues Required: No 00:19:35.443 Arbitration Mechanisms Supported 00:19:35.443 Weighted Round Robin: Not Supported 00:19:35.443 Vendor Specific: Not Supported 00:19:35.443 Reset Timeout: 7500 ms 00:19:35.443 Doorbell Stride: 4 bytes 00:19:35.443 NVM Subsystem Reset: Not Supported 00:19:35.443 Command Sets Supported 00:19:35.443 NVM Command Set: Supported 00:19:35.443 Boot Partition: Not Supported 00:19:35.443 Memory Page Size Minimum: 4096 bytes 00:19:35.443 Memory Page Size Maximum: 4096 bytes 00:19:35.443 Persistent Memory Region: Not Supported 00:19:35.443 Optional Asynchronous Events Supported 00:19:35.443 Namespace Attribute Notices: Supported 00:19:35.443 Firmware Activation Notices: Not Supported 00:19:35.443 ANA Change Notices: Supported 00:19:35.443 PLE Aggregate Log Change Notices: Not Supported 00:19:35.443 LBA Status Info Alert Notices: Not Supported 00:19:35.443 EGE Aggregate Log Change Notices: Not Supported 00:19:35.443 Normal NVM Subsystem Shutdown event: Not Supported 00:19:35.443 Zone Descriptor Change Notices: Not Supported 00:19:35.443 Discovery Log Change Notices: Not Supported 00:19:35.443 Controller Attributes 00:19:35.443 128-bit Host Identifier: Supported 00:19:35.443 Non-Operational Permissive Mode: Not Supported 00:19:35.443 NVM Sets: Not Supported 00:19:35.443 Read Recovery Levels: Not Supported 00:19:35.443 Endurance Groups: Not Supported 00:19:35.443 Predictable Latency Mode: Not Supported 00:19:35.443 Traffic Based Keep ALive: Supported 00:19:35.443 Namespace Granularity: Not Supported 00:19:35.443 SQ Associations: Not Supported 00:19:35.443 UUID List: Not Supported 00:19:35.443 Multi-Domain Subsystem: Not Supported 00:19:35.443 Fixed Capacity Management: Not Supported 00:19:35.443 Variable Capacity Management: Not Supported 00:19:35.443 Delete Endurance Group: Not Supported 00:19:35.443 Delete NVM Set: Not Supported 00:19:35.443 Extended LBA Formats Supported: Not Supported 00:19:35.443 Flexible Data Placement Supported: Not Supported 00:19:35.443 00:19:35.443 Controller Memory Buffer Support 00:19:35.443 ================================ 00:19:35.443 Supported: No 00:19:35.443 00:19:35.443 Persistent Memory Region Support 00:19:35.443 ================================ 00:19:35.443 Supported: No 00:19:35.443 00:19:35.443 Admin Command Set Attributes 00:19:35.443 ============================ 00:19:35.443 Security Send/Receive: Not Supported 00:19:35.443 Format NVM: Not Supported 00:19:35.443 Firmware Activate/Download: Not Supported 00:19:35.443 Namespace Management: Not Supported 00:19:35.443 Device Self-Test: Not Supported 00:19:35.443 Directives: Not Supported 00:19:35.443 NVMe-MI: Not Supported 00:19:35.443 Virtualization Management: Not Supported 00:19:35.443 Doorbell Buffer Config: Not Supported 00:19:35.443 Get LBA Status Capability: Not Supported 00:19:35.443 Command & Feature Lockdown 
Capability: Not Supported
00:19:35.443 Abort Command Limit: 4
00:19:35.443 Async Event Request Limit: 4
00:19:35.443 Number of Firmware Slots: N/A
00:19:35.443 Firmware Slot 1 Read-Only: N/A
00:19:35.443 Firmware Activation Without Reset: N/A
00:19:35.443 Multiple Update Detection Support: N/A
00:19:35.443 Firmware Update Granularity: No Information Provided
00:19:35.443 Per-Namespace SMART Log: Yes
00:19:35.443 Asymmetric Namespace Access Log Page: Supported
00:19:35.443 ANA Transition Time : 10 sec
00:19:35.443 
00:19:35.443 Asymmetric Namespace Access Capabilities
00:19:35.443 ANA Optimized State : Supported
00:19:35.443 ANA Non-Optimized State : Supported
00:19:35.443 ANA Inaccessible State : Supported
00:19:35.443 ANA Persistent Loss State : Supported
00:19:35.443 ANA Change State : Supported
00:19:35.443 ANAGRPID is not changed : No
00:19:35.443 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:19:35.443 
00:19:35.443 ANA Group Identifier Maximum : 128
00:19:35.443 Number of ANA Group Identifiers : 128
00:19:35.443 Max Number of Allowed Namespaces : 1024
00:19:35.443 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:19:35.443 Command Effects Log Page: Supported
00:19:35.443 Get Log Page Extended Data: Supported
00:19:35.443 Telemetry Log Pages: Not Supported
00:19:35.444 Persistent Event Log Pages: Not Supported
00:19:35.444 Supported Log Pages Log Page: May Support
00:19:35.444 Commands Supported & Effects Log Page: Not Supported
00:19:35.444 Feature Identifiers & Effects Log Page:May Support
00:19:35.444 NVMe-MI Commands & Effects Log Page: May Support
00:19:35.444 Data Area 4 for Telemetry Log: Not Supported
00:19:35.444 Error Log Page Entries Supported: 128
00:19:35.444 Keep Alive: Supported
00:19:35.444 Keep Alive Granularity: 1000 ms
00:19:35.444 
00:19:35.444 NVM Command Set Attributes
00:19:35.444 ==========================
00:19:35.444 Submission Queue Entry Size
00:19:35.444 Max: 64
00:19:35.444 Min: 64
00:19:35.444 Completion Queue Entry Size
00:19:35.444 Max: 16
00:19:35.444 Min: 16
00:19:35.444 Number of Namespaces: 1024
00:19:35.444 Compare Command: Not Supported
00:19:35.444 Write Uncorrectable Command: Not Supported
00:19:35.444 Dataset Management Command: Supported
00:19:35.444 Write Zeroes Command: Supported
00:19:35.444 Set Features Save Field: Not Supported
00:19:35.444 Reservations: Not Supported
00:19:35.444 Timestamp: Not Supported
00:19:35.444 Copy: Not Supported
00:19:35.444 Volatile Write Cache: Present
00:19:35.444 Atomic Write Unit (Normal): 1
00:19:35.444 Atomic Write Unit (PFail): 1
00:19:35.444 Atomic Compare & Write Unit: 1
00:19:35.444 Fused Compare & Write: Not Supported
00:19:35.444 Scatter-Gather List
00:19:35.444 SGL Command Set: Supported
00:19:35.444 SGL Keyed: Supported
00:19:35.444 SGL Bit Bucket Descriptor: Not Supported
00:19:35.444 SGL Metadata Pointer: Not Supported
00:19:35.444 Oversized SGL: Not Supported
00:19:35.444 SGL Metadata Address: Not Supported
00:19:35.444 SGL Offset: Supported
00:19:35.444 Transport SGL Data Block: Not Supported
00:19:35.444 Replay Protected Memory Block: Not Supported
00:19:35.444 
00:19:35.444 Firmware Slot Information
00:19:35.444 =========================
00:19:35.444 Active slot: 0
00:19:35.444 
00:19:35.444 Asymmetric Namespace Access
00:19:35.444 ===========================
00:19:35.444 Change Count : 0
00:19:35.444 Number of ANA Group Descriptors : 1
00:19:35.444 ANA Group Descriptor : 0
00:19:35.444 ANA Group ID : 1
00:19:35.444 Number of NSID Values : 1
00:19:35.444 Change Count : 0
00:19:35.444 ANA State : 1
00:19:35.444 Namespace Identifier : 1
00:19:35.444 
00:19:35.444 Commands Supported and Effects
00:19:35.444 ==============================
00:19:35.444 Admin Commands
00:19:35.444 --------------
00:19:35.444 Get Log Page (02h): Supported
00:19:35.444 Identify (06h): Supported
00:19:35.444 Abort (08h): Supported
00:19:35.444 Set Features (09h): Supported
00:19:35.444 Get Features (0Ah): Supported
00:19:35.444 Asynchronous Event Request (0Ch): Supported
00:19:35.444 Keep Alive (18h): Supported
00:19:35.444 I/O Commands
00:19:35.444 ------------
00:19:35.444 Flush (00h): Supported
00:19:35.444 Write (01h): Supported LBA-Change
00:19:35.444 Read (02h): Supported
00:19:35.444 Write Zeroes (08h): Supported LBA-Change
00:19:35.444 Dataset Management (09h): Supported
00:19:35.444 
00:19:35.444 Error Log
00:19:35.444 =========
00:19:35.444 Entry: 0
00:19:35.444 Error Count: 0x3
00:19:35.444 Submission Queue Id: 0x0
00:19:35.444 Command Id: 0x5
00:19:35.444 Phase Bit: 0
00:19:35.444 Status Code: 0x2
00:19:35.444 Status Code Type: 0x0
00:19:35.444 Do Not Retry: 1
00:19:35.444 Error Location: 0x28
00:19:35.444 LBA: 0x0
00:19:35.444 Namespace: 0x0
00:19:35.444 Vendor Log Page: 0x0
00:19:35.444 -----------
00:19:35.444 Entry: 1
00:19:35.444 Error Count: 0x2
00:19:35.444 Submission Queue Id: 0x0
00:19:35.444 Command Id: 0x5
00:19:35.444 Phase Bit: 0
00:19:35.444 Status Code: 0x2
00:19:35.444 Status Code Type: 0x0
00:19:35.444 Do Not Retry: 1
00:19:35.444 Error Location: 0x28
00:19:35.444 LBA: 0x0
00:19:35.444 Namespace: 0x0
00:19:35.444 Vendor Log Page: 0x0
00:19:35.444 -----------
00:19:35.444 Entry: 2
00:19:35.444 Error Count: 0x1
00:19:35.444 Submission Queue Id: 0x0
00:19:35.444 Command Id: 0x0
00:19:35.444 Phase Bit: 0
00:19:35.444 Status Code: 0x2
00:19:35.444 Status Code Type: 0x0
00:19:35.444 Do Not Retry: 1
00:19:35.444 Error Location: 0x28
00:19:35.444 LBA: 0x0
00:19:35.444 Namespace: 0x0
00:19:35.444 Vendor Log Page: 0x0
00:19:35.444 
00:19:35.444 Number of Queues
00:19:35.444 ================
00:19:35.444 Number of I/O Submission Queues: 128
00:19:35.444 Number of I/O Completion Queues: 128
00:19:35.444 
00:19:35.444 ZNS Specific Controller Data
00:19:35.444 ============================
00:19:35.444 Zone Append Size Limit: 0
00:19:35.444 
00:19:35.444 
00:19:35.444 Active Namespaces
00:19:35.444 =================
00:19:35.444 get_feature(0x05) failed
00:19:35.444 Namespace ID:1
00:19:35.444 Command Set Identifier: NVM (00h)
00:19:35.444 Deallocate: Supported
00:19:35.444 Deallocated/Unwritten Error: Not Supported
00:19:35.444 Deallocated Read Value: Unknown
00:19:35.444 Deallocate in Write Zeroes: Not Supported
00:19:35.444 Deallocated Guard Field: 0xFFFF
00:19:35.444 Flush: Supported
00:19:35.444 Reservation: Not Supported
00:19:35.444 Namespace Sharing Capabilities: Multiple Controllers
00:19:35.444 Size (in LBAs): 3125627568 (1490GiB)
00:19:35.444 Capacity (in LBAs): 3125627568 (1490GiB)
00:19:35.444 Utilization (in LBAs): 3125627568 (1490GiB)
00:19:35.444 UUID: 9df76bdd-4197-4d4b-9bd6-5654b0155176
00:19:35.444 Thin Provisioning: Not Supported
00:19:35.444 Per-NS Atomic Units: Yes
00:19:35.444 Atomic Boundary Size (Normal): 0
00:19:35.444 Atomic Boundary Size (PFail): 0
00:19:35.444 Atomic Boundary Offset: 0
00:19:35.444 NGUID/EUI64 Never Reused: No
00:19:35.444 ANA group ID: 1
00:19:35.444 Namespace Write Protected: No
00:19:35.444 Number of LBA Formats: 1
00:19:35.444 Current LBA Format: LBA Format #00
00:19:35.444 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:35.444 
00:19:35.444 
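[editor's note] The report above is SPDK's identify example reading the kernel nvmet target's controller and namespace data over RDMA. A roughly equivalent dump can be taken with stock nvme-cli once a host is connected; a minimal sketch, assuming the subsystem from this run is still listening on 192.168.100.8:4420 and enumerates as /dev/nvme0 (device name and address are this run's values, not fixed):

# Connect to the kernel target, then print the identify structures human-readably.
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme0 -H     # controller data: log pages, SGL support, NVM set attributes
nvme id-ns /dev/nvme0 -n 1 -H  # namespace 1: size/capacity/utilization in LBAs, LBA formats
nvme disconnect -n nqn.2016-06.io.spdk:testnqn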
11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:35.444 rmmod nvme_rdma 00:19:35.444 rmmod nvme_fabrics 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:35.444 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:35.703 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:19:35.704 11:58:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:38.993 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:38.993 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:39.930 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:19:40.189 00:19:40.189 real 0m15.222s 00:19:40.189 user 0m4.432s 00:19:40.189 sys 0m8.634s 00:19:40.189 11:58:47 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.189 11:58:47 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.189 ************************************ 00:19:40.189 END TEST nvmf_identify_kernel_target 00:19:40.189 ************************************ 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.189 ************************************ 00:19:40.189 START TEST nvmf_auth_host 00:19:40.189 ************************************ 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:40.189 * Looking for test storage... 
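[editor's note] The clean_kernel_target trace above removes the configfs objects strictly in reverse dependency order; unlinking in any other order leaves the port or subsystem busy and the rmdir calls fail. A standalone sketch of that teardown, assuming the single-port, single-namespace layout this test created and that the bare "echo 0" in the trace disables the namespace's enable attribute:

#!/usr/bin/env bash
# Tear down a kernel nvmet target (reverse of setup order).
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn

echo 0 > "$sub/namespaces/1/enable"                       # disable the namespace first
rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"  # unlink port -> subsystem
rmdir "$sub/namespaces/1"                                 # remove leaf directories...
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$sub"                                              # ...then the subsystem itself
modprobe -r nvmet_rdma nvmet                              # finally unload the modules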
00:19:40.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.189 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.190 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.190 --rc genhtml_branch_coverage=1 00:19:40.190 --rc genhtml_function_coverage=1 00:19:40.190 --rc genhtml_legend=1 00:19:40.190 --rc geninfo_all_blocks=1 00:19:40.190 --rc geninfo_unexecuted_blocks=1 00:19:40.190 00:19:40.190 ' 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.449 --rc genhtml_branch_coverage=1 00:19:40.449 --rc genhtml_function_coverage=1 00:19:40.449 --rc genhtml_legend=1 00:19:40.449 --rc geninfo_all_blocks=1 00:19:40.449 --rc geninfo_unexecuted_blocks=1 00:19:40.449 00:19:40.449 ' 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.449 --rc genhtml_branch_coverage=1 00:19:40.449 --rc genhtml_function_coverage=1 00:19:40.449 --rc genhtml_legend=1 00:19:40.449 --rc geninfo_all_blocks=1 00:19:40.449 --rc geninfo_unexecuted_blocks=1 00:19:40.449 00:19:40.449 ' 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.449 --rc genhtml_branch_coverage=1 00:19:40.449 --rc genhtml_function_coverage=1 00:19:40.449 --rc genhtml_legend=1 00:19:40.449 --rc geninfo_all_blocks=1 00:19:40.449 --rc geninfo_unexecuted_blocks=1 00:19:40.449 00:19:40.449 ' 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.449 11:58:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.449 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.450 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.450 11:58:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.021 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:47.022 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:47.022 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:47.022 11:58:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:47.022 Found net devices under 0000:da:00.0: mlx_0_0 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:47.022 Found net devices under 0000:da:00.1: mlx_0_1 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:47.022 11:58:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
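[editor's note] After loading the IB/RDMA module stack, allocate_nic_ips walks get_rdma_if_list and pulls each interface's IPv4 address with a single pipeline: field 4 of "ip -o -4 addr show" is the CIDR address, and cut strips the prefix length. The traced helper, restated as a self-contained sketch:

get_ip_address() {
    local interface=$1
    # `ip -o` prints one record per line; field 4 is "ADDR/PREFIXLEN".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 on this machine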
00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:47.022 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:47.022 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:47.022 altname enp218s0f0np0 00:19:47.022 altname ens818f0np0 00:19:47.022 inet 192.168.100.8/24 scope global mlx_0_0 00:19:47.022 valid_lft forever preferred_lft forever 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:47.022 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:47.023 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:47.023 11:58:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:47.023 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:47.023 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:47.023 altname enp218s0f1np1 00:19:47.023 altname ens818f1np1 00:19:47.023 inet 192.168.100.9/24 scope global mlx_0_1 00:19:47.023 valid_lft forever preferred_lft forever 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:47.023 192.168.100.9' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:47.023 192.168.100.9' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:47.023 192.168.100.9' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
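[editor's note] With both interfaces resolved, the trace above collapses the per-NIC addresses into RDMA_IP_LIST and peels off the first and second target IPs with head/tail. The same selection in isolation (the addresses are this run's values):

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9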
00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3299738 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3299738 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3299738 ']' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2368e47238fec7dc9df905ceade1e0ec 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Jqc 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2368e47238fec7dc9df905ceade1e0ec 0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2368e47238fec7dc9df905ceade1e0ec 0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2368e47238fec7dc9df905ceade1e0ec 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Jqc 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Jqc 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Jqc 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50ba0d4e8801f65c4e0a92962cd095a40294a8fe80d9ad547e45763dad7aab24 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Em7 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50ba0d4e8801f65c4e0a92962cd095a40294a8fe80d9ad547e45763dad7aab24 3 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50ba0d4e8801f65c4e0a92962cd095a40294a8fe80d9ad547e45763dad7aab24 3 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50ba0d4e8801f65c4e0a92962cd095a40294a8fe80d9ad547e45763dad7aab24 00:19:47.023 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Em7 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Em7 00:19:47.024 11:58:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Em7 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d244529ad0e961d2b93627189e5154efc3226a79a5f642e 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QF8 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d244529ad0e961d2b93627189e5154efc3226a79a5f642e 0 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d244529ad0e961d2b93627189e5154efc3226a79a5f642e 0 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d244529ad0e961d2b93627189e5154efc3226a79a5f642e 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QF8 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QF8 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QF8 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f711bfca0ca98b5bd823459f01964d8853b525c712f00106 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GCb 00:19:47.024 
11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f711bfca0ca98b5bd823459f01964d8853b525c712f00106 2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f711bfca0ca98b5bd823459f01964d8853b525c712f00106 2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f711bfca0ca98b5bd823459f01964d8853b525c712f00106 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GCb 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GCb 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GCb 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da2115dae7ceef3614f607484a96bc4b 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rNZ 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da2115dae7ceef3614f607484a96bc4b 1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da2115dae7ceef3614f607484a96bc4b 1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da2115dae7ceef3614f607484a96bc4b 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rNZ 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rNZ 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rNZ 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:47.024 11:58:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa38aab643b31d5d2f49b4f886be4377 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uJK 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa38aab643b31d5d2f49b4f886be4377 1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa38aab643b31d5d2f49b4f886be4377 1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa38aab643b31d5d2f49b4f886be4377 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uJK 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uJK 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uJK 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a61fc78ead2d9a47c168b27e138141be79c290ca8b3f2ef1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LZB 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a61fc78ead2d9a47c168b27e138141be79c290ca8b3f2ef1 2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
a61fc78ead2d9a47c168b27e138141be79c290ca8b3f2ef1 2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a61fc78ead2d9a47c168b27e138141be79c290ca8b3f2ef1 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LZB 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LZB 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LZB 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7386e88f8072f4602edf52e3da8aa74 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Ui 00:19:47.024 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7386e88f8072f4602edf52e3da8aa74 0 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7386e88f8072f4602edf52e3da8aa74 0 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7386e88f8072f4602edf52e3da8aa74 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Ui 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Ui 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3Ui 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.025 
11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7485f94cbc04cd30319f01a0a8614c9cff3bef8f58ad1e8d24218966891ecf73 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.A2o 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7485f94cbc04cd30319f01a0a8614c9cff3bef8f58ad1e8d24218966891ecf73 3 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7485f94cbc04cd30319f01a0a8614c9cff3bef8f58ad1e8d24218966891ecf73 3 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7485f94cbc04cd30319f01a0a8614c9cff3bef8f58ad1e8d24218966891ecf73 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.A2o 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.A2o 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.A2o 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3299738 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3299738 ']' 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
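[editor's note] The block above repeats one pattern per secret: gen_dhchap_key pulls len/2 random bytes with xxd -p -c0 -l <n> /dev/urandom, picks a file name with mktemp, pushes the hex string through an inline python snippet that wraps it in the DHHC-1 in-band secret format, and finishes with chmod 0600. A minimal self-contained sketch of that pattern follows; assumptions: python3 and bash 4+ are available, the helper name is illustrative, and the base64(key bytes + little-endian CRC-32) framing is reconstructed to match the DHHC-1:xx:...: values printed above, not copied from SPDK's source.

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2                          # e.g. "sha384" 48 (len = hex chars)
        local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len/2 random bytes, printed as hex
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<digest id, 2 hex digits>:<base64 of secret bytes + CRC-32 little-endian>:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                              # secrets must not be world-readable
        echo "$file"
    }

Called as gen_dhchap_key_sketch sha256 32, this yields a /tmp/spdk.key-sha256.* file whose contents look like the DHHC-1:01:... values in the trace.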
00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.025 11:58:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.283 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.283 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jqc 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Em7 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Em7 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QF8 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GCb ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GCb 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rNZ 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uJK ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uJK 00:19:47.284 11:58:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LZB 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3Ui ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3Ui 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.A2o 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:19:47.284 11:58:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:47.284 11:58:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:49.813 Waiting for block devices as requested 00:19:50.071 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:19:50.071 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:50.329 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:50.329 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:50.329 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:50.329 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:50.589 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:50.589 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:50.589 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:50.589 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:50.849 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:50.849 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:50.849 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:50.849 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:51.107 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:51.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:51.107 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:51.673 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:51.931 No valid GPT data, bailing 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:51.931 00:19:51.931 Discovery Log Number of Records 2, Generation counter 2 00:19:51.931 =====Discovery Log Entry 0====== 00:19:51.931 trtype: rdma 00:19:51.931 adrfam: ipv4 00:19:51.931 subtype: current discovery subsystem 00:19:51.931 treq: not specified, sq flow control disable supported 00:19:51.931 portid: 1 00:19:51.931 trsvcid: 4420 00:19:51.931 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:51.931 traddr: 192.168.100.8 00:19:51.931 eflags: none 00:19:51.931 rdma_prtype: not specified 00:19:51.931 rdma_qptype: connected 00:19:51.931 rdma_cms: rdma-cm 00:19:51.931 rdma_pkey: 0x0000 00:19:51.931 =====Discovery Log Entry 1====== 00:19:51.931 trtype: rdma 00:19:51.931 adrfam: ipv4 00:19:51.931 subtype: nvme subsystem 00:19:51.931 treq: not specified, sq flow control disable supported 00:19:51.931 portid: 1 00:19:51.931 trsvcid: 4420 00:19:51.931 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:51.931 traddr: 192.168.100.8 00:19:51.931 eflags: none 00:19:51.931 rdma_prtype: not specified 00:19:51.931 rdma_qptype: connected 00:19:51.931 rdma_cms: rdma-cm 00:19:51.931 rdma_pkey: 0x0000 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.931 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.932 11:58:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 nvme0n1 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.447 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.448 nvme0n1 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.448 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
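[editor's note] At this point the pattern for each (digest, dhgroup, keyid) combination is fully visible: the kernel target's host entry gets its hash, dhgroup and DHHC-1 secrets written into configfs, and the SPDK host side then drives three RPCs: bdev_nvme_set_options pins the negotiable digests and dhgroups, bdev_nvme_attach_controller performs the authenticated connect using the keyring names registered earlier, and bdev_nvme_get_controllers confirms nvme0 came up before it is detached for the next combination. Condensed from the trace itself (rpc.py standing in for the rpc_cmd wrapper; values taken from the iteration above):

    # host side of one iteration, condensed from the trace
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0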
00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.706 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.965 nvme0n1 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.965 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.966 11:59:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.225 nvme0n1 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.225 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 nvme0n1 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.484 11:59:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.484 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.743 nvme0n1 00:19:53.743 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.743 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.743 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.743 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.743 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 
00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.744 11:59:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.744 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 nvme0n1 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 11:59:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.003 11:59:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.262 nvme0n1 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.262 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
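Every nvme0n1 block in this stretch is one iteration of the same cycle, driven by the for-dhgroup / for-keyid loops at host/auth.sh@101-102: install key <keyid> on the kernel nvmet target, pin the SPDK host to a single digest/dhgroup pair, attach, verify the controller authenticated, detach. A condensed sketch reconstructed from the commands visible in the trace (nvmet_auth_set_key's body is paraphrased here; the real helper programs the key into the nvmet target configuration):

    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do             # 0..4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"            # target side (host/auth.sh@103)
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"                         # host side (host/auth.sh@60)
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty when no ctrlr key (host/auth.sh@58)
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" "${ckey[@]}"                # host/auth.sh@61
            # success = the controller exists and is named nvme0 (host/auth.sh@64)
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0                # host/auth.sh@65
        done
    done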
00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.521 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.522 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 nvme0n1 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.781 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.040 nvme0n1 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.041 11:59:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.041 11:59:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.041 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:55.041 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.041 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.041 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.041 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 nvme0n1 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:55.299 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.299 
11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.300 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.867 nvme0n1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.867 
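The local ip / ip_candidates sequence that runs before every attach (nvmf/common.sh@769-783) is the get_main_ns_ip helper choosing which address to dial: it maps the transport to the name of an environment variable and dereferences it with bash indirect expansion, which is why the trace prints ip=NVMF_FIRST_TARGET_IP immediately followed by echo 192.168.100.8. A sketch of that logic as it appears above (the variable holding the transport, assumed here to be TEST_TRANSPORT, expands to rdma in the [[ -z rdma ]] check):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP    # nvmf/common.sh@772
            ["tcp"]=NVMF_INITIATOR_IP        # nvmf/common.sh@773
        )
        [[ -z $TEST_TRANSPORT ]] && return 1     # trace: [[ -z rdma ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip ]] && return 1                 # trace: [[ -z NVMF_FIRST_TARGET_IP ]]
        [[ -z ${!ip} ]] && return 1              # trace: [[ -z 192.168.100.8 ]]
        echo "${!ip}"                            # indirect expansion (nvmf/common.sh@783)
    }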
11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.867 11:59:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.126 nvme0n1 00:19:56.126 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.126 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.126 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.126 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.126 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.127 
11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.127 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.386 nvme0n1 00:19:56.386 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.386 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.386 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.386 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.386 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.644 11:59:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.644 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.903 nvme0n1 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.903 
11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.903 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.904 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.904 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.904 11:59:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.162 nvme0n1 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.162 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.421 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.679 nvme0n1 00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
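The xtrace_disable / set +x pairs and the [[ 0 == 0 ]] lines bracketing every RPC are not test logic; they come from the rpc_cmd wrapper in common/autotest_common.sh, which mutes xtrace for the duration of the JSON-RPC round trip and then asserts the call's exit status. Roughly (the real wrapper keeps a persistent rpc.py session open; this is a simplified sketch):

    rpc_cmd() {
        local rc
        xtrace_disable                  # the "@563 -- # xtrace_disable" lines
        "$rootdir/scripts/rpc.py" "$@"  # defaults to the local SPDK RPC socket
        rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                  # prints as "@591 -- # [[ 0 == 0 ]]" on success
    }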
00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.679 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.937 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.938 11:59:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.196 nvme0n1 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.196 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
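As the keyid=1 and keyid=2 passes above show, the trace is driven by a loop over the key indices: host/auth.sh@102's for keyid in "${!keys[@]}" programs the target via nvmet_auth_set_key and then runs the connect for that index. Roughly, assuming keys/ckeys hold the five DHHC-1 strings echoed in this log (abbreviated below with "..."), with no controller key for keyid 4:

# Per-index loop visible in the trace. nvmet_auth_set_key and
# connect_authenticate are auth.sh's own helpers, not redefined here.
keys=( DHHC-1:00:... DHHC-1:00:... DHHC-1:01:... DHHC-1:02:... DHHC-1:03:... )
ckeys=(DHHC-1:03:... DHHC-1:02:... DHHC-1:01:... DHHC-1:00:... "")
for keyid in "${!keys[@]}"; do
  nvmet_auth_set_key sha256 ffdhe6144 "$keyid"   # target side
  connect_authenticate sha256 ffdhe6144 "$keyid" # host side
done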
00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.454 11:59:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.454 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.713 nvme0n1 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.713 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.972 11:59:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.972 11:59:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.231 nvme0n1 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.231 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:59.490 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:59.491 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:59.491 11:59:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:59.491 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.491 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.491 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.749 nvme0n1 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.749 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.007 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.007 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.007 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 
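Every key echoed in this log uses the same DHHC-1:NN:<base64>: framing. Reading NN as the transformation-hash tag (00 = untransformed) and the base64 body as the secret followed by a 4-byte CRC-32 -- the nvme-cli gen-dhchap-key convention, assumed here rather than stated anywhere in this log -- the observed lengths line up with 32-, 48- and 64-byte secrets. A quick sanity check:

# Sketch: length-check a DHHC-1 secret. The framing (secret || CRC32)
# is an assumption from the NVMe-oF in-band auth spec / nvme-cli, but
# it is consistent with every key in this trace.
check_dhchap_key() {
  local key=$1 b64 len
  b64=${key#DHHC-1:0?:}   # drop the "DHHC-1:NN:" prefix
  b64=${b64%:}            # and the trailing colon
  len=$(printf '%s' "$b64" | base64 -d | wc -c)
  case $len in
    36|52|68) echo "ok: $((len - 4))-byte secret + CRC32" ;;
    *)        echo "unexpected payload: $len bytes" >&2; return 1 ;;
  esac
}
check_dhchap_key 'DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ:'
# -> ok: 32-byte secret + CRC32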
00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.008 11:59:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.575 nvme0n1 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.575 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.576 11:59:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.143 nvme0n1 00:20:01.143 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.143 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.143 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.143 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.143 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.401 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.401 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.401 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:01.402 11:59:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.402 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.970 nvme0n1 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.970 
11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.970 11:59:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.970 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 nvme0n1 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.907 11:59:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.907 11:59:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.473 nvme0n1 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:03.473 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
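With sha256 finished, the trace above rolls over to sha384 and restarts the DH-group sweep at ffdhe2048: host/auth.sh@100-102 nest the digest, dhgroup and keyid loops. The driver therefore has roughly this shape (array contents are inferred from the digests and groups visible in the log, not copied from auth.sh):

# Nested sweep the trace documents: each digest against each FFDHE
# group, all five key indices per combination.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
      connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
    done
  done
done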
00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.474 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 nvme0n1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.733 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.992 nvme0n1 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.992 11:59:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.992 11:59:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.992 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 nvme0n1 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.250 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.509 11:59:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.509 nvme0n1 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.509 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.768 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:20:04.769 nvme0n1 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.769 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.028 
11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.028 11:59:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.287 nvme0n1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.287 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.547 nvme0n1 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.547 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.806 nvme0n1 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.806 11:59:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.806 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.807 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.807 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.807 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.807 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.807 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.066 11:59:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.066 nvme0n1 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.066 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.325 11:59:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.325 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.584 nvme0n1 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.584 11:59:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.584 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.843 nvme0n1 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.843 11:59:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.843 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:06.844 11:59:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.844 11:59:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.410 nvme0n1 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.410 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.411 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.670 nvme0n1 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.670 11:59:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.670 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.929 nvme0n1 00:20:07.929 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.929 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.929 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.929 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.929 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.187 11:59:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.188 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.446 nvme0n1 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.446 11:59:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:08.446 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.447 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.014 nvme0n1 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
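
A note on the secrets being programmed in this trace: every key and ckey uses the NVMe DH-HMAC-CHAP secret representation DHHC-1:<tt>:<base64>:, where the two-digit <tt> field names an optional transformation hash for the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, independent of the digest negotiated for the session) and the base64 payload carries the secret followed by a 4-byte CRC-32 check value. That reading comes from the NVMe base specification, not from this log; the decode below is a sketch against one key copied verbatim from the trace:

  # Hedged sketch: split one DHHC-1 secret from this trace into its fields.
  key='DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ:'
  echo "transform hash id: $(cut -d: -f2 <<< "$key")"   # 01 -> SHA-256
  b64=$(cut -d: -f3 <<< "$key")       # base64 payload between the colons
  base64 -d <<< "$b64" | head -c 32   # 32-byte secret (here it happens to be ASCII hex)
  echo
  # the 4 bytes after the secret are a CRC-32 integrity check over it
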
00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.014 11:59:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 nvme0n1 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.585 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.586 11:59:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.586 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.153 nvme0n1 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
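
Each (dhgroup, keyid) iteration in this trace follows one fixed pattern, tagged with host/auth.sh line numbers in the xtrace: program the key pair into the target side, pin the SPDK host to a single digest and DH group via bdev_nvme_set_options, attach over RDMA with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller key exists for that slot), check that bdev_nvme_get_controllers reports nvme0, and detach before the next slot. A minimal reconstruction follows, assuming the keys/ckeys arrays and the rpc_cmd and get_main_ns_ip helpers of SPDK's test/nvmf/host/auth.sh as seen in the trace; the configfs writes inside nvmet_auth_set_key are an assumption, since xtrace does not record redirection targets, only the echoed values:

  nvmet_auth_set_key() {    # host/auth.sh@42-51: target-side key setup
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed kernel-nvmet configfs attributes; only the echoed values
    # ('hmac(sha384)', the dhgroup name, the DHHC-1 secrets) appear in the log
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }

  connect_authenticate() {   # host/auth.sh@55-65: host-side round trip
    local digest=$1 dhgroup=$2 keyid=$3
    # empty when no controller key exists for this slot (e.g. keyid 4 above)
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # get_main_ns_ip resolves to NVMF_FIRST_TARGET_IP (192.168.100.8) for rdma
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  }

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do    # groups seen in this excerpt
    for keyid in "${!keys[@]}"; do                    # host/auth.sh@102
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # host/auth.sh@103
      connect_authenticate sha384 "$dhgroup" "$keyid" # host/auth.sh@104
    done
  done

Detaching at the end of every slot also returns the host to a clean state, so the next bdev_nvme_set_options call applies to a fresh attach rather than an already-authenticated controller.
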
00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.153 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.154 11:59:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.154 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.412 nvme0n1 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.412 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.670 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.670 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.671 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.929 nvme0n1 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.929 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.188 11:59:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.188 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.756 nvme0n1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.756 11:59:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.323 nvme0n1 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.323 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.581 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.582 11:59:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.149 nvme0n1 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:13.149 
11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.149 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 nvme0n1 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.084 11:59:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 nvme0n1 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.652 11:59:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.652 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.911 nvme0n1 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.911 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.912 11:59:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.912 11:59:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.170 nvme0n1 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.171 11:59:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
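For reference, the host-side RPC sequence that connect_authenticate traces for each (digest, dhgroup, keyid) tuple reduces to the sketch below. The flags, address, and NQNs are the ones captured above; the rpc.py path is an assumption, since the test drives these calls through its rpc_cmd wrapper.

  # Sketch of one connect_authenticate pass (sha512 / ffdhe2048 / keyid 2, as traced above).
  rpc=./spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
  # Restrict the host to the digest/dhgroup pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Attach over RDMA with the DH-HMAC-CHAP key pair for this keyid.
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Verify the controller came up, then tear it down for the next iteration.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expects "nvme0"
  $rpc bdev_nvme_detach_controller nvme0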
00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.171 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.429 nvme0n1 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.429 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.430 11:59:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.430 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.688 nvme0n1 00:20:15.688 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.688 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.688 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.688 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.689 11:59:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.689 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 nvme0n1 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.947 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.948 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.948 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.948 11:59:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
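The host/auth.sh@100-104 markers recurring above make the overall shape of the test visible: three nested loops, sketched below. Only sha384/sha512 and ffdhe2048/ffdhe3072/ffdhe4096/ffdhe8192 actually appear in this excerpt; any further array contents are assumptions.

  # Sketch of the loop structure the xtrace output above is walking through
  # (host/auth.sh@100-104); array contents beyond those visible here are assumed.
  for digest in "${digests[@]}"; do          # sha384, sha512 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048..ffdhe8192 appear in this excerpt
      for keyid in "${!keys[@]}"; do         # keyids 0-4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
      done
    done
  done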
00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.207 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.466 nvme0n1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.466 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.725 nvme0n1 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.725 11:59:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.725 11:59:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.725 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 nvme0n1 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 
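The get_main_ns_ip block traced before every attach resolves the target address. Reconstructed from the nvmf/common.sh@769-783 lines above, it behaves roughly as follows; the transport variable name and the indirect expansion are assumptions, while the candidate table and the 192.168.100.8 result appear verbatim in the trace.

  # Rough reconstruction of get_main_ns_ip as traced above (nvmf/common.sh@769-783).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Bail out if the transport or its candidate variable name is unset ("rdma" here).
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # the variable *name*, e.g. NVMF_FIRST_TARGET_IP
      [[ -z ${!ip} ]] && return 1                  # indirect expansion; empty means no address
      echo "${!ip}"                                # prints 192.168.100.8 in this run
  }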
00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:16.984 11:59:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.984 11:59:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.242 nvme0n1 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.242 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.500 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 nvme0n1 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.759 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.760 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.018 nvme0n1 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.018 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.019 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.019 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.019 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.019 11:59:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.019 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.277 nvme0n1 00:20:18.277 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.277 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.277 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.536 11:59:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.536 11:59:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.536 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.795 nvme0n1 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.795 11:59:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.363 nvme0n1 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.363 
11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.363 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.621 nvme0n1 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:19.621 11:59:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:19.621 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.622 11:59:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 nvme0n1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.188 11:59:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.188 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.756 nvme0n1 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
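
The cycle that just completed is one full pass of connect_authenticate: restrict the host to one digest/DH-group pair, attach over RDMA with this keyid's secrets, check that the controller came up under the expected name, and detach. Condensed into standalone commands it reads roughly as below; a sketch, assuming the usual location of the SPDK RPC client, and that the key names key1/ckey1 were registered with the target application earlier in the suite (outside this excerpt):

    # one connect/verify/disconnect pass: sha512 digest, ffdhe6144 DH group, keyid 1
    rpc=./scripts/rpc.py    # assumed path to the SPDK RPC client wrapped by rpc_cmd
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # the DH-HMAC-CHAP handshake runs inside the attach; if it succeeded,
    # exactly one controller named nvme0 exists
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0
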
00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
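
The nvmf/common.sh@769-783 run being traced here is get_main_ns_ip deciding which address to dial: an associative array maps each transport to the name of the environment variable that holds the target IP, and indirect expansion then dereferences that name. A condensed reconstruction from the trace (TEST_TRANSPORT stands in for the literal rdma seen above, and the real helper's remaining fallback branches are omitted):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # pick the variable *name* for the transport under test (rdma here)
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}       # ip=NVMF_FIRST_TARGET_IP
        # ...then dereference it; ${!ip} expands to 192.168.100.8 in this run
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
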
00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.756 11:59:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.015 nvme0n1 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.015 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.273 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]]
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08:
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.274 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:21.532 nvme0n1
00:20:21.532 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.532 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:21.532 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:21.532 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=:
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=:
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.533 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770
-- # local -A ip_candidates 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:21.791 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.792 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.792 11:59:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.050 nvme0n1 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 
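
The key=/ckey= values just assigned follow the DH-HMAC-CHAP shared-secret encoding from NVMe TP-8006 (the format nvme-cli's gen-dhchap-key produces): DHHC-1:<id>:<base64>:, where the id byte says how the secret is transformed before use (00 = used as-is, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. Decoded, the key above carries a 32-byte secret and its ckey a 64-byte one. A hypothetical shape-checker, not part of the suite:

    # sanity-check a DHHC-1 secret string such as the key=/ckey= values above
    check_dhchap_key() {
        local key=$1 b64 bytes
        [[ $key =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/]+={0,2}):$ ]] || return 1
        b64=${BASH_REMATCH[2]}
        bytes=$(printf '%s' "$b64" | base64 -d | wc -c)  # secret + 4-byte CRC-32
        echo "transform id ${BASH_REMATCH[1]}, secret length $((bytes - 4)) bytes"
    }
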
00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM2OGU0NzIzOGZlYzdkYzlkZjkwNWNlYWRlMWUwZWOpWHdF: 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBiYTBkNGU4ODAxZjY1YzRlMGE5Mjk2MmNkMDk1YTQwMjk0YThmZTgwZDlhZDU0N2U0NTc2M2RhZDdhYWIyNDfI1RU=: 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.050 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.309 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.875 nvme0n1 
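The attach that just printed nvme0n1 is the host-side half, connect_authenticate: the initiator is first restricted to exactly one digest and one DH group, then attaches with the matching keyring entries. The same two RPCs, sketched through scripts/rpk wrapper's underlying scripts/rpc.py rather than the suite's rpc_cmd (key0/ckey0 are the keyring names the run registered for keyid 0 earlier):

    # Host-side sketch of the connect_authenticate step traced above.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # on success the RPC prints the created bdev name: nvme0n1, as logged above

Pinning --dhchap-digests and --dhchap-dhgroups to a single value per iteration is what lets the matrix claim each digest/dhgroup pair was really negotiated, rather than silently falling back to some other combination.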
00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.875 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.876 11:59:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.876 11:59:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.443 nvme0n1 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.443 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
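The ip_candidates bookkeeping traced around this point is the get_main_ns_ip helper from nvmf/common.sh: it maps the transport to the name of the variable that holds the target address, then dereferences it (the echo that follows). A reconstruction from the trace, since the function body itself is not shown verbatim in the log; with TEST_TRANSPORT=rdma it resolves through NVMF_FIRST_TARGET_IP to 192.168.100.8:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the *name* of the variable to read
        [[ -z ${!ip} ]] && return 1            # indirect expansion gives the address
        echo "${!ip}"                          # 192.168.100.8 on this rdma run
    }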
00:20:23.701 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:23.702 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.702 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.702 11:59:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.268 nvme0n1 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.268 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZmM3OGVhZDJkOWE0N2MxNjhiMjdlMTM4MTQxYmU3OWMyOTBjYThiM2YyZWYxlhKwcg==: 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjczODZlODhmODA3MmY0NjAyZWRmNTJlM2RhOGFhNzR3wj08: 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:24.269 11:59:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.269 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.836 nvme0n1 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
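Each successful iteration is closed out the same way, as traced above: list the controllers over JSON-RPC, check that exactly the expected one came up, and detach it so the next digest/dhgroup pair starts from a clean slate. (The \n\v\m\e\0 noise in the trace is only bash xtrace escaping the right-hand side of [[ nvme0 == nvme0 ]].) The equivalent, sketched with rpc.py and jq:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # controller present => auth succeeded
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0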
00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.836 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQ4NWY5NGNiYzA0Y2QzMDMxOWYwMWEwYTg2MTRjOWNmZjNiZWY4ZjU4YWQxZThkMjQyMTg5NjY4OTFlY2Y3M0d1ovE=: 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.837 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.096 11:59:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 nvme0n1 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:25.663 
11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 request: 00:20:25.663 { 00:20:25.663 "name": "nvme0", 00:20:25.663 "trtype": "rdma", 00:20:25.663 "traddr": "192.168.100.8", 00:20:25.663 "adrfam": "ipv4", 00:20:25.663 "trsvcid": "4420", 00:20:25.663 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:20:25.663 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:25.663 "prchk_reftag": false, 00:20:25.663 "prchk_guard": false, 00:20:25.663 "hdgst": false, 00:20:25.663 "ddgst": false, 00:20:25.663 "allow_unrecognized_csi": false, 00:20:25.663 "method": "bdev_nvme_attach_controller", 00:20:25.663 "req_id": 1 00:20:25.663 } 00:20:25.663 Got JSON-RPC error response 00:20:25.663 response: 00:20:25.663 { 00:20:25.663 "code": -5, 00:20:25.663 "message": "Input/output error" 00:20:25.663 } 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.663 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.922 request: 00:20:25.922 { 00:20:25.922 "name": "nvme0", 00:20:25.922 "trtype": "rdma", 00:20:25.922 "traddr": "192.168.100.8", 00:20:25.922 "adrfam": "ipv4", 00:20:25.922 "trsvcid": "4420", 00:20:25.922 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:25.922 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:25.922 "prchk_reftag": false, 00:20:25.922 "prchk_guard": false, 00:20:25.922 "hdgst": false, 00:20:25.922 "ddgst": false, 00:20:25.922 "dhchap_key": "key2", 00:20:25.922 "allow_unrecognized_csi": false, 00:20:25.922 "method": "bdev_nvme_attach_controller", 00:20:25.922 "req_id": 1 00:20:25.922 } 00:20:25.922 Got JSON-RPC error response 00:20:25.922 response: 00:20:25.922 { 00:20:25.922 "code": -5, 00:20:25.922 "message": "Input/output error" 00:20:25.922 } 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.922 11:59:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.181 request: 00:20:26.181 { 00:20:26.181 "name": "nvme0", 00:20:26.181 "trtype": "rdma", 00:20:26.181 "traddr": "192.168.100.8", 00:20:26.181 "adrfam": "ipv4", 00:20:26.181 "trsvcid": "4420", 00:20:26.181 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:26.181 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:26.181 "prchk_reftag": false, 00:20:26.181 "prchk_guard": false, 00:20:26.181 "hdgst": false, 00:20:26.181 "ddgst": false, 00:20:26.181 "dhchap_key": "key1", 00:20:26.181 "dhchap_ctrlr_key": "ckey2", 00:20:26.181 "allow_unrecognized_csi": false, 00:20:26.181 "method": "bdev_nvme_attach_controller", 00:20:26.181 "req_id": 1 00:20:26.181 } 00:20:26.181 Got JSON-RPC error response 00:20:26.181 response: 00:20:26.181 { 00:20:26.181 "code": -5, 00:20:26.181 "message": "Input/output error" 00:20:26.181 } 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:26.181 11:59:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.181 nvme0n1 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.181 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.440 
11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.440 request: 00:20:26.440 { 00:20:26.440 "name": "nvme0", 00:20:26.440 "dhchap_key": "key1", 00:20:26.440 "dhchap_ctrlr_key": "ckey2", 00:20:26.440 "method": "bdev_nvme_set_keys", 00:20:26.440 "req_id": 1 00:20:26.440 } 00:20:26.440 Got JSON-RPC error response 00:20:26.440 response: 00:20:26.440 { 00:20:26.440 "code": -13, 00:20:26.440 "message": "Permission denied" 00:20:26.440 } 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.440 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:26.441 11:59:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:27.816 11:59:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QyNDQ1MjlhZDBlOTYxZDJiOTM2MjcxODllNTE1NGVmYzMyMjZhNzlhNWY2NDJlvTTkQw==: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcxMWJmY2EwY2E5OGI1YmQ4MjM0NTlmMDE5NjRkODg1M2I1MjVjNzEyZjAwMTA2FHGrTA==: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 nvme0n1 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.752 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGEyMTE1ZGFlN2NlZWYzNjE0ZjYwNzQ4NGE5NmJjNGI61ewZ: 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: ]] 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWEzOGFhYjY0M2IzMWQ1ZDJmNDliNGY4ODZiZTQzNzfE1Lto: 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.753 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.011 request: 00:20:29.011 { 00:20:29.011 "name": "nvme0", 00:20:29.011 "dhchap_key": "key2", 00:20:29.012 "dhchap_ctrlr_key": "ckey1", 00:20:29.012 "method": "bdev_nvme_set_keys", 00:20:29.012 "req_id": 1 00:20:29.012 } 00:20:29.012 Got JSON-RPC error response 00:20:29.012 response: 00:20:29.012 { 00:20:29.012 "code": -13, 00:20:29.012 "message": "Permission denied" 00:20:29.012 } 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:29.012 11:59:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:29.947 11:59:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:30.882 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.882 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:30.882 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.882 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.882 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:31.141 
11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:31.141 rmmod nvme_rdma 00:20:31.141 rmmod nvme_fabrics 00:20:31.141 11:59:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3299738 ']' 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3299738 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3299738 ']' 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3299738 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3299738 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3299738' 00:20:31.141 killing process with pid 3299738 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3299738 00:20:31.141 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3299738 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:31.400 
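[Editor's note] clean_kernel_target, continuing in the trace below, unwinds the kernel nvmet configfs tree bottom-up before unloading the modules. A hedged sketch of that teardown order as standalone shell (paths copied from this run; xtrace does not show redirect targets, so the destination of the bare 'echo 0' is our assumption):

    # Sketch of the configfs teardown performed below (nqn/paths from this run).
    cfs=/sys/kernel/config/nvmet
    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    rm "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"     # unlink host from subsystem
    rmdir "$cfs/hosts/$hostnqn"
    echo 0 > "$cfs/subsystems/$subnqn/namespaces/1/enable"  # assumed target of the 'echo 0'
    rm -f "$cfs/ports/1/subsystems/$subnqn"                 # detach subsystem from port
    rmdir "$cfs/subsystems/$subnqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$subnqn"
    modprobe -r nvmet_rdma nvmet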
11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:20:31.400 11:59:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:34.688 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:34.688 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:35.624 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:20:35.883 11:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Jqc /tmp/spdk.key-null.QF8 /tmp/spdk.key-sha256.rNZ /tmp/spdk.key-sha384.LZB /tmp/spdk.key-sha512.A2o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:20:35.883 11:59:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:39.172 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:39.172 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:39.172 0000:80:04.2 (8086 2021): Already 
using the vfio-pci driver
00:20:39.172 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:20:39.172 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:20:39.172
00:20:39.172 real 0m58.593s
00:20:39.172 user 0m54.112s
00:20:39.172 sys 0m12.900s
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:39.172 ************************************
00:20:39.172 END TEST nvmf_auth_host
00:20:39.172 ************************************
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]]
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:39.172 ************************************
00:20:39.172 START TEST nvmf_bdevperf
00:20:39.172 ************************************
00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma
00:20:39.172 * Looking for test storage...
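[Editor's note] The long trace that follows is SPDK's version-comparison machinery (cmp_versions, lt, decimal) deciding whether the installed lcov is older than 2.x, which selects the legacy --rc lcov_* option spellings. A compact standalone sketch of the same dotted-version comparison (helper name is ours; SPDK's cmp_versions supports more operators):

    # Minimal dotted-version "less than" test, the idea the trace below steps through.
    version_lt() {                # returns 0 (true) if $1 < $2, field by field
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"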
00:20:39.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.172 --rc genhtml_branch_coverage=1 00:20:39.172 --rc genhtml_function_coverage=1 00:20:39.172 --rc genhtml_legend=1 00:20:39.172 --rc geninfo_all_blocks=1 00:20:39.172 --rc geninfo_unexecuted_blocks=1 00:20:39.172 00:20:39.172 ' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.172 --rc genhtml_branch_coverage=1 00:20:39.172 --rc genhtml_function_coverage=1 00:20:39.172 --rc genhtml_legend=1 00:20:39.172 --rc geninfo_all_blocks=1 00:20:39.172 --rc geninfo_unexecuted_blocks=1 00:20:39.172 00:20:39.172 ' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.172 --rc genhtml_branch_coverage=1 00:20:39.172 --rc genhtml_function_coverage=1 00:20:39.172 --rc genhtml_legend=1 00:20:39.172 --rc geninfo_all_blocks=1 00:20:39.172 --rc geninfo_unexecuted_blocks=1 00:20:39.172 00:20:39.172 ' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.172 --rc genhtml_branch_coverage=1 00:20:39.172 --rc genhtml_function_coverage=1 00:20:39.172 --rc genhtml_legend=1 00:20:39.172 --rc geninfo_all_blocks=1 00:20:39.172 --rc geninfo_unexecuted_blocks=1 00:20:39.172 00:20:39.172 ' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.172 11:59:46 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.172 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:39.173 11:59:46 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.173 11:59:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.742 11:59:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:45.742 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:45.742 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:45.742 Found net devices under 0000:da:00.0: mlx_0_0 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:45.742 Found net devices under 0000:da:00.1: mlx_0_1 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:45.742 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:45.743 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:45.743 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:45.743 altname enp218s0f0np0 00:20:45.743 altname ens818f0np0 00:20:45.743 inet 192.168.100.8/24 scope global mlx_0_0 00:20:45.743 valid_lft forever preferred_lft forever 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:45.743 11:59:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:45.743 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:45.743 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:45.743 altname enp218s0f1np1 00:20:45.743 altname ens818f1np1 00:20:45.743 inet 192.168.100.9/24 scope global mlx_0_1 00:20:45.743 valid_lft forever preferred_lft forever 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:45.743 11:59:52 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:45.743 192.168.100.9' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:45.743 192.168.100.9' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:45.743 192.168.100.9' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3314380 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3314380 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3314380 ']' 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.743 11:59:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 [2024-12-09 11:59:52.893283] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:20:45.744 [2024-12-09 11:59:52.893326] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.744 [2024-12-09 11:59:52.970441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:45.744 [2024-12-09 11:59:53.012835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.744 [2024-12-09 11:59:53.012872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.744 [2024-12-09 11:59:53.012880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.744 [2024-12-09 11:59:53.012886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.744 [2024-12-09 11:59:53.012891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
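[Editor's note] nvmfappstart, traced above, launches nvmf_tgt with core mask 0xE and then blocks in waitforlisten until the RPC socket answers; the reactor notices just below correspond to cores 1-3 of that mask. A hedged sketch of the same launch-and-wait pattern (the polling loop is ours; SPDK's waitforlisten helper does more bookkeeping):

    # Start the target pinned to cores 1-3 and wait for its RPC socket to come up.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # keep polling until the app listens on the UNIX domain socket
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"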
00:20:45.744 [2024-12-09 11:59:53.015830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.744 [2024-12-09 11:59:53.015918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.744 [2024-12-09 11:59:53.015919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 [2024-12-09 11:59:53.184415] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcdc080/0xce0570) succeed. 00:20:45.744 [2024-12-09 11:59:53.195550] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcdd670/0xd21c10) succeed. 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 Malloc0 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
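[Editor's note] Steps 17-21 of bdevperf.sh, traced above, assemble the whole target: RDMA transport, a RAM-backed bdev, a subsystem, its namespace, and the listener (the "Listening on 192.168.100.8 port 4420" notice just below confirms the last step). The same bring-up as an explicit rpc.py sequence, with arguments copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420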
00:20:45.744 [2024-12-09 11:59:53.346771] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.744 { 00:20:45.744 "params": { 00:20:45.744 "name": "Nvme$subsystem", 00:20:45.744 "trtype": "$TEST_TRANSPORT", 00:20:45.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.744 "adrfam": "ipv4", 00:20:45.744 "trsvcid": "$NVMF_PORT", 00:20:45.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.744 "hdgst": ${hdgst:-false}, 00:20:45.744 "ddgst": ${ddgst:-false} 00:20:45.744 }, 00:20:45.744 "method": "bdev_nvme_attach_controller" 00:20:45.744 } 00:20:45.744 EOF 00:20:45.744 )") 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:20:45.744 11:59:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:45.744 "params": { 00:20:45.744 "name": "Nvme1", 00:20:45.744 "trtype": "rdma", 00:20:45.744 "traddr": "192.168.100.8", 00:20:45.744 "adrfam": "ipv4", 00:20:45.744 "trsvcid": "4420", 00:20:45.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.744 "hdgst": false, 00:20:45.744 "ddgst": false 00:20:45.744 }, 00:20:45.744 "method": "bdev_nvme_attach_controller" 00:20:45.744 }' 00:20:45.744 [2024-12-09 11:59:53.394718] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:20:45.744 [2024-12-09 11:59:53.394756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314408 ] 00:20:45.744 [2024-12-09 11:59:53.470915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.744 [2024-12-09 11:59:53.512039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.744 Running I/O for 1 seconds... 
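[Editor's note] The bdevperf run above takes its configuration over /dev/fd/62 from gen_nvmf_target_json; the results of the 1-second verify run follow below. A standalone equivalent that writes the config to a file instead of a pipe: the params object is exactly the one printf'd in the trace, while the surrounding subsystems/config wrapper is our assumption based on SPDK's standard JSON-config shape (gen_nvmf_target_json adds an equivalent wrapper):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1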
00:20:46.678 17280.00 IOPS, 67.50 MiB/s
00:20:46.678 Latency(us)
00:20:46.678 [2024-12-09T10:59:54.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:46.678 Verification LBA range: start 0x0 length 0x4000
00:20:46.678 Nvme1n1 : 1.01 17319.81 67.66 0.00 0.00 7349.37 153.11 10735.42
00:20:46.678 [2024-12-09T10:59:54.731Z] ===================================================================================================================
00:20:46.678 [2024-12-09T10:59:54.731Z] Total : 17319.81 67.66 0.00 0.00 7349.37 153.11 10735.42
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3314644
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:20:46.936 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:46.937 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:46.937 {
00:20:46.937 "params": {
00:20:46.937 "name": "Nvme$subsystem",
00:20:46.937 "trtype": "$TEST_TRANSPORT",
00:20:46.937 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.937 "adrfam": "ipv4",
00:20:46.937 "trsvcid": "$NVMF_PORT",
00:20:46.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.937 "hdgst": ${hdgst:-false},
00:20:46.937 "ddgst": ${ddgst:-false}
00:20:46.937 },
00:20:46.937 "method": "bdev_nvme_attach_controller"
00:20:46.937 }
00:20:46.937 EOF
00:20:46.937 )")
00:20:46.937 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:20:46.937 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:20:46.937 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:20:46.937 11:59:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:20:46.937 "params": {
00:20:46.937 "name": "Nvme1",
00:20:46.937 "trtype": "rdma",
00:20:46.937 "traddr": "192.168.100.8",
00:20:46.937 "adrfam": "ipv4",
00:20:46.937 "trsvcid": "4420",
00:20:46.937 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:46.937 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:46.937 "hdgst": false,
00:20:46.937 "ddgst": false
00:20:46.937 },
00:20:46.937 "method": "bdev_nvme_attach_controller"
00:20:46.937 }'
00:20:46.937 [2024-12-09 11:59:54.924000] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization...
00:20:46.937 [2024-12-09 11:59:54.924054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314644 ] 00:20:47.193 [2024-12-09 11:59:55.003274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.193 [2024-12-09 11:59:55.041415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.193 Running I/O for 15 seconds... 00:20:49.498 17346.00 IOPS, 67.76 MiB/s [2024-12-09T10:59:58.118Z] 17425.00 IOPS, 68.07 MiB/s [2024-12-09T10:59:58.118Z] 11:59:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3314380 00:20:50.065 11:59:57 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:20:50.892 15445.33 IOPS, 60.33 MiB/s [2024-12-09T10:59:58.945Z] [2024-12-09 11:59:58.920323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.892 [2024-12-09 11:59:58.920359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.892 [2024-12-09 11:59:58.920377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.892 [2024-12-09 11:59:58.920384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.892 [2024-12-09 11:59:58.920394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.892 [2024-12-09 11:59:58.920401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.893 [2024-12-09 11:59:58.920409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.893 [2024-12-09 11:59:58.920416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.893 [2024-12-09 11:59:58.920424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.893 [2024-12-09 11:59:58.920431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.893 [2024-12-09 11:59:58.920440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.893 [2024-12-09 11:59:58.920446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.893 [2024-12-09 11:59:58.920455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.893 [2024-12-09 11:59:58.920462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.893 [2024-12-09 11:59:58.920470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108864 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.893 [2024-12-09 11:59:58.920477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 
[... the same NOTICE pair repeats for every remaining queued command on qid:1: WRITE lba:108872 through lba:109560 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:108544 through lba:108768 (SGL KEYED DATA BLOCK, len:0x1000, key:0x183700), each nvme_io_qpair_print_command immediately followed by an identical ABORTED - SQ DELETION (00/08) completion ...] 
00:20:50.896 [2024-12-09 11:59:58.922210] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x183700 00:20:50.896 [2024-12-09 11:59:58.922217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.896 [2024-12-09 11:59:58.922225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x183700 00:20:50.896 [2024-12-09 11:59:58.922232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.896 [2024-12-09 11:59:58.922240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x183700 00:20:50.896 [2024-12-09 11:59:58.922246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7a04000 sqhd:7210 p:0 m:0 dnr:0 00:20:50.896 [2024-12-09 11:59:58.924098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:50.896 [2024-12-09 11:59:58.924112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:50.896 [2024-12-09 11:59:58.924120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:20:50.896 [2024-12-09 11:59:58.924127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.896 [2024-12-09 11:59:58.926944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:50.896 [2024-12-09 11:59:58.941633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.154 [2024-12-09 11:59:58.944480] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.154 [2024-12-09 11:59:58.944500] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.154 [2024-12-09 11:59:58.944506] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:20:51.981 11584.00 IOPS, 45.25 MiB/s [2024-12-09T11:00:00.034Z] [2024-12-09 11:59:59.948554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.981 [2024-12-09 11:59:59.948615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:51.981 [2024-12-09 11:59:59.949213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:51.981 [2024-12-09 11:59:59.949242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:51.981 [2024-12-09 11:59:59.949265] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:20:51.981 [2024-12-09 11:59:59.949290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
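The wall of ABORTED - SQ DELETION notices above is the initiator draining its full queue depth of 128 after the target was killed: every in-flight command is completed manually with an abort status, then bdev_nvme disconnects and enters the reconnect loop seen here (CQ transport error -6, RDMA_CM_EVENT_REJECTED, connect error -74), which repeats until a listener answers again. On recent SPDK releases the cadence and give-up point of that loop are tunable before the controller is attached; a sketch, with option names assumed from current rpc.py (verify with `rpc.py bdev_nvme_set_options -h` on your tree):

# Sketch: bound bdev_nvme's retry behavior when a target disappears.
# Option names assumed from recent SPDK; check rpc.py bdev_nvme_set_options -h.
#   --reconnect-delay-sec:      seconds between reconnect attempts
#   --fast-io-fail-timeout-sec: start failing I/O upward after this long
#   --ctrlr-loss-timeout-sec:   give up on the controller entirely after this long
rpc.py bdev_nvme_set_options \
    --reconnect-delay-sec 2 \
    --fast-io-fail-timeout-sec 10 \
    --ctrlr-loss-timeout-sec 60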
00:20:51.981 [2024-12-09 11:59:59.956465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:51.981 [2024-12-09 11:59:59.959327] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.981 [2024-12-09 11:59:59.959346] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.981 [2024-12-09 11:59:59.959352] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:20:53.174 9267.20 IOPS, 36.20 MiB/s [2024-12-09T11:00:01.227Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3314380 Killed "${NVMF_APP[@]}" "$@" 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3315695 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3315695 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3315695 ']' 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.174 12:00:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.174 [2024-12-09 12:00:00.942533] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:20:53.174 [2024-12-09 12:00:00.942584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.174 [2024-12-09 12:00:00.963240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:53.174 [2024-12-09 12:00:00.963271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
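The shell's "line 35: 3314380 Killed" is expected: bdevperf.sh hard-kills the first nvmf_tgt while the 15-second verify job is still running, then tgt_init brings up a replacement so the initiator can reconnect and finish the window (the sagging 11584/9267 IOPS samples cover the stretch where the target is gone). The drill reduces to the sketch below; start_tgt and start_bdevperf are placeholder helpers standing in for the script's tgt_init and its bdevperf call:

# Sketch of the failover drill this log performs: I/O running, target SIGKILLed
# mid-run, fresh target brought up, initiator reconnects and completes the run.
start_tgt &               # placeholder: nvmf_tgt with the RDMA listener configured
tgt_pid=$!
start_bdevperf -t 15 &    # placeholder: 15 s verify workload, queue depth 128
perf_pid=$!
sleep 3
kill -9 "$tgt_pid"        # hard-kill mid-I/O; queued commands abort (see above)
start_tgt &               # replacement target; bdev_nvme reconnects to it
wait "$perf_pid"          # bdevperf runs out its full 15 s window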
00:20:53.174 [2024-12-09 12:00:00.963450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:53.174 [2024-12-09 12:00:00.963460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:53.175 [2024-12-09 12:00:00.963469] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:20:53.175 [2024-12-09 12:00:00.963483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:53.175 [2024-12-09 12:00:00.970022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:53.175 [2024-12-09 12:00:00.972727] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:53.175 [2024-12-09 12:00:00.972750] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:53.175 [2024-12-09 12:00:00.972757] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:20:53.175 [2024-12-09 12:00:01.023607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.175 [2024-12-09 12:00:01.064856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.175 [2024-12-09 12:00:01.064894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.175 [2024-12-09 12:00:01.064901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.175 [2024-12-09 12:00:01.064908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.175 [2024-12-09 12:00:01.064913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.175 [2024-12-09 12:00:01.066245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.175 [2024-12-09 12:00:01.066359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.175 [2024-12-09 12:00:01.066360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.175 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 [2024-12-09 12:00:01.229137] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1432080/0x1436570) succeed. 
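The replacement target is launched with -e 0xFFFF, so every tracepoint group is enabled and app_setup_trace prints how to collect them. Straight from those notices, either of the following works while the target runs (the shm file name encodes the -i 0 instance id):

# Live snapshot of the target's tracepoints, as suggested by the notices above:
spdk_trace -s nvmf -i 0
# Or copy the raw trace ring out of shm for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0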
00:20:53.433 [2024-12-09 12:00:01.240574] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1433670/0x1477c10) succeed. 00:20:53.433 7722.67 IOPS, 30.17 MiB/s [2024-12-09T11:00:01.486Z] 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 Malloc0 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.433 [2024-12-09 12:00:01.392560] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.433 12:00:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3314644 00:20:53.998 [2024-12-09 12:00:01.976813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:53.998 [2024-12-09 12:00:01.976840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:53.998 [2024-12-09 12:00:01.977015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:53.998 [2024-12-09 12:00:01.977025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:53.998 [2024-12-09 12:00:01.977035] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:20:53.998 [2024-12-09 12:00:01.977046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
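The rpc_cmd sequence above is the entire data path for this target: an RDMA transport, a 64 MiB malloc bdev, one subsystem carrying that namespace, and the 192.168.100.8:4420 listener. The same bring-up written as direct rpc.py calls (rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock, which is where this target listens):

# Sketch: the target bring-up performed above, as plain rpc.py calls.
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420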
00:20:53.998 [2024-12-09 12:00:01.979981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:53.998 [2024-12-09 12:00:02.016738] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:20:55.630 7164.86 IOPS, 27.99 MiB/s [2024-12-09T11:00:04.616Z] 8429.00 IOPS, 32.93 MiB/s [2024-12-09T11:00:05.548Z] 9410.67 IOPS, 36.76 MiB/s [2024-12-09T11:00:06.482Z] 10196.00 IOPS, 39.83 MiB/s [2024-12-09T11:00:07.415Z] 10840.00 IOPS, 42.34 MiB/s [2024-12-09T11:00:08.348Z] 11375.33 IOPS, 44.43 MiB/s [2024-12-09T11:00:09.282Z] 11827.31 IOPS, 46.20 MiB/s [2024-12-09T11:00:10.654Z] 12216.00 IOPS, 47.72 MiB/s [2024-12-09T11:00:10.654Z] 12548.60 IOPS, 49.02 MiB/s 00:21:02.601 Latency(us) 00:21:02.601 [2024-12-09T11:00:10.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:02.601 Verification LBA range: start 0x0 length 0x4000 00:21:02.601 Nvme1n1 : 15.01 12548.18 49.02 10147.96 0.00 5620.49 460.31 1046578.71 00:21:02.601 [2024-12-09T11:00:10.654Z] =================================================================================================================== 00:21:02.601 [2024-12-09T11:00:10.654Z] Total : 12548.18 49.02 10147.96 0.00 5620.49 460.31 1046578.71 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:02.601 rmmod nvme_rdma 00:21:02.601 rmmod nvme_fabrics 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3315695 ']' 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3315695 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3315695 ']' 00:21:02.601 12:00:10 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3315695 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3315695 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3315695' 00:21:02.601 killing process with pid 3315695 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3315695 00:21:02.601 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3315695 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:02.859 00:21:02.859 real 0m24.092s 00:21:02.859 user 1m2.220s 00:21:02.859 sys 0m5.492s 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:02.859 ************************************ 00:21:02.859 END TEST nvmf_bdevperf 00:21:02.859 ************************************ 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.859 ************************************ 00:21:02.859 START TEST nvmf_target_disconnect 00:21:02.859 ************************************ 00:21:02.859 12:00:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:03.118 * Looking for test storage... 
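The nvmftestfini/nvmfcleanup trace above tears the fixture down in reverse order of setup: flush, unload the host-side NVMe modules (modprobe -v prints the rmmod steps it performs, visible in the log), then kill the target by pid. A standalone sketch of the same cleanup, assuming root privileges and that $nvmfpid holds the nvmf_tgt pid (3315695 in this run):

  # Sketch of nvmftestfini's cleanup path for the rdma transport.
  sync
  modprobe -v -r nvme-rdma      # also pulls out nvme_fabrics, per the rmmod lines above
  modprobe -v -r nvme-fabrics   # no-op if the previous removal already covered it
  kill "$nvmfpid"
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit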
00:21:03.118 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:03.118 12:00:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:03.118 12:00:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:21:03.118 12:00:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:21:03.118 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:03.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.119 --rc genhtml_branch_coverage=1 00:21:03.119 --rc genhtml_function_coverage=1 00:21:03.119 --rc genhtml_legend=1 00:21:03.119 --rc geninfo_all_blocks=1 00:21:03.119 --rc geninfo_unexecuted_blocks=1 00:21:03.119 00:21:03.119 ' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:03.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.119 --rc genhtml_branch_coverage=1 00:21:03.119 --rc genhtml_function_coverage=1 00:21:03.119 --rc genhtml_legend=1 00:21:03.119 --rc geninfo_all_blocks=1 00:21:03.119 --rc geninfo_unexecuted_blocks=1 00:21:03.119 00:21:03.119 ' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:03.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.119 --rc genhtml_branch_coverage=1 00:21:03.119 --rc genhtml_function_coverage=1 00:21:03.119 --rc genhtml_legend=1 00:21:03.119 --rc geninfo_all_blocks=1 00:21:03.119 --rc geninfo_unexecuted_blocks=1 00:21:03.119 00:21:03.119 ' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:03.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.119 --rc genhtml_branch_coverage=1 00:21:03.119 --rc genhtml_function_coverage=1 00:21:03.119 --rc genhtml_legend=1 00:21:03.119 --rc geninfo_all_blocks=1 00:21:03.119 --rc geninfo_unexecuted_blocks=1 00:21:03.119 00:21:03.119 ' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.119 12:00:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:09.685 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:09.685 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:09.686 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:09.686 12:00:16 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:09.686 Found net devices under 0000:da:00.0: mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:09.686 Found net devices under 0000:da:00.1: mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
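Device discovery above is pure sysfs: the harness matches PCI vendor/device IDs against its mlx allow-list (0x15b3 with 0x1015 here, hence the two 'Found 0000:da:00.x' hits) and then globs the netdevs registered under each function. A minimal sketch of the same lookup, with the PCI addresses taken from this run:

  # Sketch: what is behind 'Found net devices under 0000:da:00.0: mlx_0_0'.
  for pci in 0000:da:00.0 0000:da:00.1; do
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x15b3 (Mellanox)
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x1015
    echo "$pci ($vendor - $device): $(ls "/sys/bus/pci/devices/$pci/net/")"
  done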
00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:09.686 12:00:16 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:09.686 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:09.686 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:21:09.686 altname enp218s0f0np0 00:21:09.686 altname ens818f0np0 00:21:09.686 inet 192.168.100.8/24 scope global mlx_0_0 00:21:09.686 valid_lft forever preferred_lft forever 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:09.686 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:09.686 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:21:09.686 altname enp218s0f1np1 00:21:09.686 altname ens818f1np1 00:21:09.686 inet 192.168.100.9/24 scope global mlx_0_1 00:21:09.686 valid_lft forever preferred_lft forever 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
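The address extraction above uses no RDMA-specific tooling at all: get_ip_address is just field 4 of `ip -o -4 addr show` with the prefix length stripped. The same helper, lifted from the trace (nvmf/common.sh@116-117):

  # get_ip_address as traced above: first IPv4 address of an interface.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9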
00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:09.686 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:09.687 192.168.100.9' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:09.687 192.168.100.9' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:09.687 192.168.100.9' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.687 12:00:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:09.687 ************************************ 00:21:09.687 START TEST nvmf_target_disconnect_tc1 00:21:09.687 ************************************ 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:21:09.687 12:00:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:09.687 [2024-12-09 12:00:17.138168] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:09.687 [2024-12-09 12:00:17.138266] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:09.687 [2024-12-09 12:00:17.138290] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:21:10.255 [2024-12-09 12:00:18.142328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:21:10.255 [2024-12-09 12:00:18.142363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
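The rejected RDMA connect above, and the probe failure it cascades into just below, is the whole point of tc1: nothing is listening on 192.168.100.8:4420 yet, so the reconnect example must exit non-zero, and the NOT wrapper turns that failure into a pass (es=1 further down). Stripped of the valid_exec_arg plumbing, the assertion is roughly this sketch:

  # Sketch of tc1's expectation: the connect must fail while no target is up.
  if /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
       -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
       -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'; then
    echo 'tc1 failed: connect unexpectedly succeeded' >&2
    exit 1
  fi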
00:21:10.255 [2024-12-09 12:00:18.142373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:21:10.255 [2024-12-09 12:00:18.142397] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:10.255 [2024-12-09 12:00:18.142405] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:21:10.255 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:21:10.255 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:10.255 Initializing NVMe Controllers 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.255 00:21:10.255 real 0m1.146s 00:21:10.255 user 0m0.929s 00:21:10.255 sys 0m0.201s 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:10.255 ************************************ 00:21:10.255 END TEST nvmf_target_disconnect_tc1 00:21:10.255 ************************************ 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:10.255 ************************************ 00:21:10.255 START TEST nvmf_target_disconnect_tc2 00:21:10.255 ************************************ 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3321217 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3321217 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3321217 ']' 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.255 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.256 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.256 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.256 [2024-12-09 12:00:18.282894] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:21:10.256 [2024-12-09 12:00:18.282935] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.515 [2024-12-09 12:00:18.360158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.515 [2024-12-09 12:00:18.402120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.515 [2024-12-09 12:00:18.402156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.515 [2024-12-09 12:00:18.402163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.515 [2024-12-09 12:00:18.402169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.515 [2024-12-09 12:00:18.402174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
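nvmfappstart above boots the target (-i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0xF0 is the core mask) and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock and the paths from this workspace:

  # Sketch: launch nvmf_tgt and poll its RPC socket until it is ready.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done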
00:21:10.515 [2024-12-09 12:00:18.403753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:10.515 [2024-12-09 12:00:18.403842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:10.515 [2024-12-09 12:00:18.403972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:21:10.515 [2024-12-09 12:00:18.403971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.515 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.774 Malloc0 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.774 [2024-12-09 12:00:18.606099] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24019f0/0x240d6d0) succeed. 00:21:10.774 [2024-12-09 12:00:18.617920] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2403080/0x244ed70) succeed. 
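The reactor lines above decode the mask: 0xF0 is binary 11110000, so the reactors land on cores 4 through 7, exactly as logged. The first RPC after startup then creates the RDMA transport, which is what makes the target open both Mellanox ports and print the two 'Create IB device' notices. Issued by hand, that step would be:

  # Sketch: the transport-creation RPC issued by the harness above.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  echo 'obase=2; ibase=16; F0' | bc   # 11110000: bits 4..7 set, matching the reactor cores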
00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.774 [2024-12-09 12:00:18.762176] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:10.774 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.775 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.775 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.775 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3321261 00:21:10.775 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:21:10.775 12:00:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:13.302 12:00:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3321217 00:21:13.302 12:00:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Write completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 Read completed with error (sct=0, sc=8) 00:21:14.239 starting I/O failed 00:21:14.239 [2024-12-09 12:00:21.972915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:14.807 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3321217 Killed "${NVMF_APP[@]}" "$@" 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart 
-m 0xF0 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3321952 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3321952 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3321952 ']' 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.807 12:00:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.807 [2024-12-09 12:00:22.839834] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:21:14.807 [2024-12-09 12:00:22.839882] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.067 [2024-12-09 12:00:22.916610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.067 [2024-12-09 12:00:22.955885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.067 [2024-12-09 12:00:22.955922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.067 [2024-12-09 12:00:22.955930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.067 [2024-12-09 12:00:22.955936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.067 [2024-12-09 12:00:22.955941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
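The run above is the heart of target_disconnect tc2: the harness starts the reconnect example against the target, hard-kills the nvmf_tgt process while random I/O is in flight, and then disconnect_init brings a fresh target up on a different core mask. Every queued command on the host side completes with sct=0, sc=8, which is NVMe generic status 0x08, "Command Aborted due to SQ Deletion", exactly what is expected when the target's queues vanish underneath the initiator. A minimal sketch of the kill/restart step, with flags copied from the log (the pid variables, sleeps, and relative paths are assumptions about the harness environment):

    # Sketch of the tc2 kill/restart step; flags mirror the log above.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"                              # SIGKILL the target mid-I/O
    sleep 2
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # restart on cores 4-7
    nvmfpid=$!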
00:21:15.067 [2024-12-09 12:00:22.957461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:15.067 [2024-12-09 12:00:22.957569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:15.067 [2024-12-09 12:00:22.957602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.067 [2024-12-09 12:00:22.957603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Write completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 Read completed with error (sct=0, sc=8) 00:21:15.067 starting I/O failed 00:21:15.067 [2024-12-09 12:00:22.978193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.067 [2024-12-09 12:00:22.979863] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:15.067 [2024-12-09 12:00:22.979883] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:15.067 [2024-12-09 12:00:22.979890] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.067 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 Malloc0 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 [2024-12-09 12:00:23.174971] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9109f0/0x91c6d0) succeed. 00:21:15.326 [2024-12-09 12:00:23.186817] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x912080/0x95dd70) succeed. 
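The RDMA_CM_EVENT_REJECTED (8) above is the expected race: the host keeps retrying while the new target is still initializing, and until the listener is re-created nothing is accepting on 192.168.100.8:4420, so the connection request is rejected at the RDMA CM layer (error -74, plausibly -EBADMSG from the CM-event validation) before any NVMe-level status exists. Once the target is up it re-creates its objects, Malloc0, the RDMA transport, and the two mlx5 IB devices, as logged here. A script doing a similar restart can avoid hammering a dead listener by polling the target until the listener reappears; a rough sketch, assuming rpc.py on the default /var/tmp/spdk.sock socket and the plain-text grep below as the match condition:

    # Poll until an RDMA listener on port 4420 is registered again (sketch).
    until ./scripts/rpc.py nvmf_get_subsystems 2>/dev/null | grep -q '"trsvcid": "4420"'; do
        sleep 0.5
    done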
00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.326 [2024-12-09 12:00:23.331150] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:15.326 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.327 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.327 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.327 12:00:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3321261 00:21:16.264 [2024-12-09 12:00:23.984009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 
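From here the log settles into a retry loop with a fixed signature: the target rejects each I/O-queue CONNECT with "Unknown controller ID 0x1", the host sees the Fabrics CONNECT complete with sct 1, sc 130 (status code type 1 is command-specific; for a Fabrics CONNECT, 0x82 is "Connect Invalid Parameters", here presumably the stale controller ID carried over from before the restart), the qpair is torn down with CQ transport error -6, and the example retries. To sanity-check by hand that the restarted target itself is healthy and accepts fresh controllers even while these stale-ID reconnects fail, a one-shot discover/connect with stock nvme-cli (an illustrative cross-check, not part of this harness) would look like:

    # Illustrative manual cross-check from a host with nvme-cli and an RDMA-capable NIC.
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1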
00:21:16.264 [2024-12-09 12:00:23.997236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:23.997296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:23.997316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:23.997325] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:23.997332] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.007226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.017031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.017076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.017092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.017099] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.017106] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.027373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.037060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.037106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.037125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.037132] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.037139] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.047253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 
00:21:16.264 [2024-12-09 12:00:24.057134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.057177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.057192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.057200] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.057206] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.067515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.077269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.077312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.077327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.077335] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.077342] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.087457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.097266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.097304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.097318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.097326] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.097332] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.107563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 
00:21:16.264 [2024-12-09 12:00:24.117359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.117398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.117413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.117421] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.117431] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.127760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.137512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.137555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.137570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.137578] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.137584] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.147910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.157595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.157633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.157649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.157656] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.157663] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.167931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 
00:21:16.264 [2024-12-09 12:00:24.177611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.177650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.177665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.177672] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.177678] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.188171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.197564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.264 [2024-12-09 12:00:24.197605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.264 [2024-12-09 12:00:24.197620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.264 [2024-12-09 12:00:24.197627] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.264 [2024-12-09 12:00:24.197634] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.264 [2024-12-09 12:00:24.208033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.264 qpair failed and we were unable to recover it. 00:21:16.264 [2024-12-09 12:00:24.217641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.265 [2024-12-09 12:00:24.217683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.265 [2024-12-09 12:00:24.217698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.265 [2024-12-09 12:00:24.217705] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.265 [2024-12-09 12:00:24.217711] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.265 [2024-12-09 12:00:24.228067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.265 qpair failed and we were unable to recover it. 
00:21:16.265 [2024-12-09 12:00:24.237758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.265 [2024-12-09 12:00:24.237800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.265 [2024-12-09 12:00:24.237822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.265 [2024-12-09 12:00:24.237829] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.265 [2024-12-09 12:00:24.237835] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.265 [2024-12-09 12:00:24.248001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.265 qpair failed and we were unable to recover it. 00:21:16.265 [2024-12-09 12:00:24.257715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.265 [2024-12-09 12:00:24.257762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.265 [2024-12-09 12:00:24.257777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.265 [2024-12-09 12:00:24.257785] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.265 [2024-12-09 12:00:24.257791] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.265 [2024-12-09 12:00:24.268278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.265 qpair failed and we were unable to recover it. 00:21:16.265 [2024-12-09 12:00:24.277913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.265 [2024-12-09 12:00:24.277954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.265 [2024-12-09 12:00:24.277969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.265 [2024-12-09 12:00:24.277976] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.265 [2024-12-09 12:00:24.277982] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.265 [2024-12-09 12:00:24.288333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.265 qpair failed and we were unable to recover it. 
00:21:16.265 [2024-12-09 12:00:24.297854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.265 [2024-12-09 12:00:24.297897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.265 [2024-12-09 12:00:24.297912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.265 [2024-12-09 12:00:24.297919] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.265 [2024-12-09 12:00:24.297926] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.265 [2024-12-09 12:00:24.308435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.265 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.318042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.318083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.318098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.318105] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.318112] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.328447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.338026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.338066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.338081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.338088] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.338095] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.348502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 
00:21:16.524 [2024-12-09 12:00:24.358107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.358149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.358165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.358172] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.358178] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.368426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.378206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.378247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.378262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.378272] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.378279] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.388580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.398302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.398344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.398359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.398366] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.398373] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.408507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 
00:21:16.524 [2024-12-09 12:00:24.418310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.418355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.418370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.418377] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.418384] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.428644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.438383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.438427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.438441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.438449] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.438455] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.448637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 00:21:16.524 [2024-12-09 12:00:24.458518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.458559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.458574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.458582] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.458592] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.524 [2024-12-09 12:00:24.468764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.524 qpair failed and we were unable to recover it. 
00:21:16.524 [2024-12-09 12:00:24.478541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.524 [2024-12-09 12:00:24.478579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.524 [2024-12-09 12:00:24.478595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.524 [2024-12-09 12:00:24.478602] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.524 [2024-12-09 12:00:24.478609] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.525 [2024-12-09 12:00:24.489051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.525 qpair failed and we were unable to recover it. 00:21:16.525 [2024-12-09 12:00:24.498578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.525 [2024-12-09 12:00:24.498621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.525 [2024-12-09 12:00:24.498637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.525 [2024-12-09 12:00:24.498644] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.525 [2024-12-09 12:00:24.498651] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.525 [2024-12-09 12:00:24.509052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.525 qpair failed and we were unable to recover it. 00:21:16.525 [2024-12-09 12:00:24.518566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.525 [2024-12-09 12:00:24.518600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.525 [2024-12-09 12:00:24.518616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.525 [2024-12-09 12:00:24.518623] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.525 [2024-12-09 12:00:24.518630] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.525 [2024-12-09 12:00:24.529024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.525 qpair failed and we were unable to recover it. 
00:21:16.525 [2024-12-09 12:00:24.538641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.525 [2024-12-09 12:00:24.538682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.525 [2024-12-09 12:00:24.538697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.525 [2024-12-09 12:00:24.538704] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.525 [2024-12-09 12:00:24.538711] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.525 [2024-12-09 12:00:24.548984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.525 qpair failed and we were unable to recover it. 00:21:16.525 [2024-12-09 12:00:24.558822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.525 [2024-12-09 12:00:24.558871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.525 [2024-12-09 12:00:24.558887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.525 [2024-12-09 12:00:24.558894] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.525 [2024-12-09 12:00:24.558900] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.525 [2024-12-09 12:00:24.569139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.525 qpair failed and we were unable to recover it. 00:21:16.784 [2024-12-09 12:00:24.579511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.784 [2024-12-09 12:00:24.579554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.784 [2024-12-09 12:00:24.579570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.784 [2024-12-09 12:00:24.579577] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.784 [2024-12-09 12:00:24.579583] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.784 [2024-12-09 12:00:24.589265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.784 qpair failed and we were unable to recover it. 
00:21:16.784 [2024-12-09 12:00:24.599013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.784 [2024-12-09 12:00:24.599052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.784 [2024-12-09 12:00:24.599067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.784 [2024-12-09 12:00:24.599075] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.784 [2024-12-09 12:00:24.599081] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.784 [2024-12-09 12:00:24.609284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.784 qpair failed and we were unable to recover it. 00:21:16.784 [2024-12-09 12:00:24.618971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.784 [2024-12-09 12:00:24.619011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.784 [2024-12-09 12:00:24.619026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.784 [2024-12-09 12:00:24.619034] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.784 [2024-12-09 12:00:24.619041] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.784 [2024-12-09 12:00:24.629413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.784 qpair failed and we were unable to recover it. 00:21:16.784 [2024-12-09 12:00:24.638939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.784 [2024-12-09 12:00:24.638980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.784 [2024-12-09 12:00:24.638998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.784 [2024-12-09 12:00:24.639006] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.784 [2024-12-09 12:00:24.639012] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.784 [2024-12-09 12:00:24.649300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.784 qpair failed and we were unable to recover it. 
00:21:16.784 [2024-12-09 12:00:24.659025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.784 [2024-12-09 12:00:24.659069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.784 [2024-12-09 12:00:24.659085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.784 [2024-12-09 12:00:24.659092] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.784 [2024-12-09 12:00:24.659099] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.784 [2024-12-09 12:00:24.669931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.784 qpair failed and we were unable to recover it. 00:21:16.784 [2024-12-09 12:00:24.679021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.679062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.679078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.679085] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.679092] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.689453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 00:21:16.785 [2024-12-09 12:00:24.699157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.699197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.699212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.699219] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.699226] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.709520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 
00:21:16.785 [2024-12-09 12:00:24.719135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.719177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.719192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.719205] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.719212] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.729678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 00:21:16.785 [2024-12-09 12:00:24.739354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.739393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.739408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.739416] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.739422] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.749377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 00:21:16.785 [2024-12-09 12:00:24.759126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.759164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.759180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.759188] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.759194] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.769755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 
00:21:16.785 [2024-12-09 12:00:24.779342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.779385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.779400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.779408] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.779414] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.789771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 00:21:16.785 [2024-12-09 12:00:24.799369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.799410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.799425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.799433] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.799439] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.809828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 00:21:16.785 [2024-12-09 12:00:24.819482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:16.785 [2024-12-09 12:00:24.819524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:16.785 [2024-12-09 12:00:24.819539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:16.785 [2024-12-09 12:00:24.819546] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:16.785 [2024-12-09 12:00:24.819553] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:16.785 [2024-12-09 12:00:24.829963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:16.785 qpair failed and we were unable to recover it. 
00:21:18.084 [2024-12-09 12:00:26.103240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.084 [2024-12-09 12:00:26.103283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.084 [2024-12-09 12:00:26.103299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.084 [2024-12-09 12:00:26.103306] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.084 [2024-12-09 12:00:26.103313] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.084 [2024-12-09 12:00:26.113611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.084 qpair failed and we were unable to recover it. 00:21:18.084 [2024-12-09 12:00:26.123293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.084 [2024-12-09 12:00:26.123330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.084 [2024-12-09 12:00:26.123345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.084 [2024-12-09 12:00:26.123353] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.084 [2024-12-09 12:00:26.123360] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.084 [2024-12-09 12:00:26.134003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.084 qpair failed and we were unable to recover it. 00:21:18.344 [2024-12-09 12:00:26.143363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.143409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.143423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.143431] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.143437] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.153806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 
00:21:18.344 [2024-12-09 12:00:26.163419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.163462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.163476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.163483] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.163490] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.173871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 00:21:18.344 [2024-12-09 12:00:26.183419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.183464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.183479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.183486] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.183492] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.193822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 00:21:18.344 [2024-12-09 12:00:26.203491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.203527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.203542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.203550] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.203556] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.213840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 
00:21:18.344 [2024-12-09 12:00:26.223536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.223577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.223592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.223599] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.223605] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.233910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 00:21:18.344 [2024-12-09 12:00:26.243572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.243614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.243630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.243637] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.243644] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.344 [2024-12-09 12:00:26.253952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.344 qpair failed and we were unable to recover it. 00:21:18.344 [2024-12-09 12:00:26.263702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.344 [2024-12-09 12:00:26.263749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.344 [2024-12-09 12:00:26.263764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.344 [2024-12-09 12:00:26.263771] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.344 [2024-12-09 12:00:26.263778] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.274179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 
00:21:18.345 [2024-12-09 12:00:26.283780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.283821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.283836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.283843] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.283850] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.294023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 00:21:18.345 [2024-12-09 12:00:26.303774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.303825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.303840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.303848] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.303854] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.314244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 00:21:18.345 [2024-12-09 12:00:26.323778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.323827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.323842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.323849] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.323856] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.334176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 
00:21:18.345 [2024-12-09 12:00:26.343844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.343889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.343904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.343915] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.343922] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.354134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 00:21:18.345 [2024-12-09 12:00:26.363962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.363998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.364013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.364020] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.364026] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.374526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 00:21:18.345 [2024-12-09 12:00:26.384049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.345 [2024-12-09 12:00:26.384092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.345 [2024-12-09 12:00:26.384108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.345 [2024-12-09 12:00:26.384115] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.345 [2024-12-09 12:00:26.384122] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.345 [2024-12-09 12:00:26.394483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.345 qpair failed and we were unable to recover it. 
00:21:18.607 [2024-12-09 12:00:26.404075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.404117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.404132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.404139] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.404145] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.414440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.424145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.424190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.424205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.424212] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.424219] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.434514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.444189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.444228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.444244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.444251] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.444258] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.454600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 
00:21:18.607 [2024-12-09 12:00:26.464309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.464351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.464366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.464373] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.464380] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.474728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.484331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.484378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.484392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.484400] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.484406] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.494770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.504379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.504416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.504431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.504439] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.504445] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.514744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 
00:21:18.607 [2024-12-09 12:00:26.524409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.524446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.524462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.524469] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.524476] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.534878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.544430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.544475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.544490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.544498] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.544504] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.554684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.564446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.564489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.564504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.564511] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.564517] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.574995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 
00:21:18.607 [2024-12-09 12:00:26.584609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.584651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.584665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.584673] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.584680] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.595568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.604592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.604636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.607 [2024-12-09 12:00:26.604654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.607 [2024-12-09 12:00:26.604662] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.607 [2024-12-09 12:00:26.604668] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.607 [2024-12-09 12:00:26.614840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.607 qpair failed and we were unable to recover it. 00:21:18.607 [2024-12-09 12:00:26.624678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.607 [2024-12-09 12:00:26.624721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.608 [2024-12-09 12:00:26.624737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.608 [2024-12-09 12:00:26.624744] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.608 [2024-12-09 12:00:26.624750] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.608 [2024-12-09 12:00:26.635198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.608 qpair failed and we were unable to recover it. 
00:21:18.608 [2024-12-09 12:00:26.644669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.608 [2024-12-09 12:00:26.644709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.608 [2024-12-09 12:00:26.644723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.608 [2024-12-09 12:00:26.644730] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.608 [2024-12-09 12:00:26.644737] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.608 [2024-12-09 12:00:26.655053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.608 qpair failed and we were unable to recover it. 00:21:18.867 [2024-12-09 12:00:26.664788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.664837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.664852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.664859] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.664866] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.675124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.867 qpair failed and we were unable to recover it. 00:21:18.867 [2024-12-09 12:00:26.684797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.684845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.684860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.684872] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.684879] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.695297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.867 qpair failed and we were unable to recover it. 
00:21:18.867 [2024-12-09 12:00:26.704877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.704919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.704933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.704940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.704947] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.715360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.867 qpair failed and we were unable to recover it. 00:21:18.867 [2024-12-09 12:00:26.724814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.724857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.724872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.724880] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.724887] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.735373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.867 qpair failed and we were unable to recover it. 00:21:18.867 [2024-12-09 12:00:26.745175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.745223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.745238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.745245] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.745252] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.755354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.867 qpair failed and we were unable to recover it. 
00:21:18.867 [2024-12-09 12:00:26.765179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.867 [2024-12-09 12:00:26.765218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.867 [2024-12-09 12:00:26.765233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.867 [2024-12-09 12:00:26.765240] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.867 [2024-12-09 12:00:26.765247] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.867 [2024-12-09 12:00:26.775379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:18.868 [2024-12-09 12:00:26.785231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.785274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.785290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.785297] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.785303] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.795613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:18.868 [2024-12-09 12:00:26.805173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.805217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.805231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.805239] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.805245] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.815436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 
00:21:18.868 [2024-12-09 12:00:26.825308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.825349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.825364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.825371] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.825378] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.835713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:18.868 [2024-12-09 12:00:26.845297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.845336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.845351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.845359] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.845365] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.855607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:18.868 [2024-12-09 12:00:26.865376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.865420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.865435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.865443] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.865449] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.875758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 
00:21:18.868 [2024-12-09 12:00:26.885412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.885452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.885467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.885475] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.885482] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.895610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:18.868 [2024-12-09 12:00:26.905434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:18.868 [2024-12-09 12:00:26.905471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:18.868 [2024-12-09 12:00:26.905487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:18.868 [2024-12-09 12:00:26.905495] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:18.868 [2024-12-09 12:00:26.905501] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:18.868 [2024-12-09 12:00:26.915952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.868 qpair failed and we were unable to recover it. 00:21:19.127 [2024-12-09 12:00:26.925532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:26.925567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:26.925582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:26.925589] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:26.925596] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:26.935897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 
00:21:19.128 [2024-12-09 12:00:26.945610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:26.945652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:26.945670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:26.945678] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:26.945684] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:26.956007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:26.965643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:26.965686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:26.965701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:26.965709] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:26.965715] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:26.976005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:26.985715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:26.985752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:26.985767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:26.985774] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:26.985780] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:26.996087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 
00:21:19.128 [2024-12-09 12:00:27.005866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.005913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.005928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.005936] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.005942] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.016295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.025917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.025961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.025976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.025987] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.025993] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.036186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.045922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.045966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.045981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.045989] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.045995] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.056152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 
00:21:19.128 [2024-12-09 12:00:27.065924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.065966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.065981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.065988] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.065995] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.076460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.086036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.086072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.086087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.086094] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.086101] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.096367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.106023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.106063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.106078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.106085] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.106091] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.116449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 
00:21:19.128 [2024-12-09 12:00:27.126174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.126219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.126233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.126241] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.126247] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.136403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.146147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.146188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.146202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.146210] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.146216] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.156454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 00:21:19.128 [2024-12-09 12:00:27.166338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:19.128 [2024-12-09 12:00:27.166375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:19.128 [2024-12-09 12:00:27.166390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:19.128 [2024-12-09 12:00:27.166397] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:19.128 [2024-12-09 12:00:27.166404] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:19.128 [2024-12-09 12:00:27.176478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:19.128 qpair failed and we were unable to recover it. 
00:21:19.388 [2024-12-09 12:00:27.186193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.186235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.186250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.186257] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.186263] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.196709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.206274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.206313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.206328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.206335] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.206341] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.216609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.226439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.226485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.226500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.226507] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.226514] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.237052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.246409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.246456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.246471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.246479] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.246485] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.256739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.266496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.266539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.266554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.266562] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.266568] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.276771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.286457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.286505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.286523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.286530] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.286536] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.296953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.306584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.306627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.306642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.306649] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.306655] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.316913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.326675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.326712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.326727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.326735] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.326741] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.336857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.346628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.346669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.346684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.346691] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.346698] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.356981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.366672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.366709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.366724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.366731] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.366741] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.388 [2024-12-09 12:00:27.377113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.388 qpair failed and we were unable to recover it.
00:21:19.388 [2024-12-09 12:00:27.386874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.388 [2024-12-09 12:00:27.386910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.388 [2024-12-09 12:00:27.386925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.388 [2024-12-09 12:00:27.386933] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.388 [2024-12-09 12:00:27.386939] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.389 [2024-12-09 12:00:27.397134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.389 qpair failed and we were unable to recover it.
00:21:19.389 [2024-12-09 12:00:27.406814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.389 [2024-12-09 12:00:27.406857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.389 [2024-12-09 12:00:27.406872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.389 [2024-12-09 12:00:27.406879] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.389 [2024-12-09 12:00:27.406885] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.389 [2024-12-09 12:00:27.417202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.389 qpair failed and we were unable to recover it.
00:21:19.389 [2024-12-09 12:00:27.427145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.389 [2024-12-09 12:00:27.427185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.389 [2024-12-09 12:00:27.427201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.389 [2024-12-09 12:00:27.427208] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.389 [2024-12-09 12:00:27.427215] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.389 [2024-12-09 12:00:27.437218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.389 qpair failed and we were unable to recover it.
00:21:19.648 [2024-12-09 12:00:27.447000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.648 [2024-12-09 12:00:27.447039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.648 [2024-12-09 12:00:27.447054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.648 [2024-12-09 12:00:27.447062] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.648 [2024-12-09 12:00:27.447068] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.648 [2024-12-09 12:00:27.457260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.648 qpair failed and we were unable to recover it.
00:21:19.648 [2024-12-09 12:00:27.467177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.648 [2024-12-09 12:00:27.467221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.648 [2024-12-09 12:00:27.467236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.648 [2024-12-09 12:00:27.467243] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.648 [2024-12-09 12:00:27.467250] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.648 [2024-12-09 12:00:27.477287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.648 qpair failed and we were unable to recover it.
00:21:19.648 [2024-12-09 12:00:27.487059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.648 [2024-12-09 12:00:27.487100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.648 [2024-12-09 12:00:27.487115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.648 [2024-12-09 12:00:27.487122] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.648 [2024-12-09 12:00:27.487128] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.648 [2024-12-09 12:00:27.497310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.648 qpair failed and we were unable to recover it.
00:21:19.648 [2024-12-09 12:00:27.507152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.648 [2024-12-09 12:00:27.507193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.648 [2024-12-09 12:00:27.507208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.648 [2024-12-09 12:00:27.507215] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.648 [2024-12-09 12:00:27.507222] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.648 [2024-12-09 12:00:27.517513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.648 qpair failed and we were unable to recover it.
00:21:19.648 [2024-12-09 12:00:27.527272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.648 [2024-12-09 12:00:27.527315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.648 [2024-12-09 12:00:27.527330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.527337] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.527344] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.537450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.547347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.547387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.547402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.547409] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.547416] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.557585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.567448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.567487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.567502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.567509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.567515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.577527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.587436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.587476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.587491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.587498] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.587505] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.597661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.607440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.607482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.607504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.607511] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.607517] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.617790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.627466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.627508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.627527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.627534] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.627541] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.637863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.647624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.647661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.647676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.647683] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.647690] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.657931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.667676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.667717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.667732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.667739] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.667745] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.677861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.649 [2024-12-09 12:00:27.687692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.649 [2024-12-09 12:00:27.687733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.649 [2024-12-09 12:00:27.687748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.649 [2024-12-09 12:00:27.687755] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.649 [2024-12-09 12:00:27.687762] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.649 [2024-12-09 12:00:27.697988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.649 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.707773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.707813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.707829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.707837] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.707846] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.909 [2024-12-09 12:00:27.718128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.909 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.727966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.728001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.728017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.728024] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.728031] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.909 [2024-12-09 12:00:27.738084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.909 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.747948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.747989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.748004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.748012] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.748019] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.909 [2024-12-09 12:00:27.758153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.909 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.768084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.768125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.768140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.768147] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.768153] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.909 [2024-12-09 12:00:27.778136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.909 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.788153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.788192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.788208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.788215] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.788221] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.909 [2024-12-09 12:00:27.798106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.909 qpair failed and we were unable to recover it.
00:21:19.909 [2024-12-09 12:00:27.808136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.909 [2024-12-09 12:00:27.808171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.909 [2024-12-09 12:00:27.808187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.909 [2024-12-09 12:00:27.808194] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.909 [2024-12-09 12:00:27.808200] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.818352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.828091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.828135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.828150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.828157] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.828164] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.838495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.848162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.848209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.848224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.848231] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.848237] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.858567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.868355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.868394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.868417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.868424] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.868431] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.878904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.888388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.888432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.888447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.888454] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.888461] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.898484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.908405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.908447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.908462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.908470] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.908476] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.918708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.928515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.928561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.928576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.928583] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.928590] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.938816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:19.910 [2024-12-09 12:00:27.948508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:19.910 [2024-12-09 12:00:27.948557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:19.910 [2024-12-09 12:00:27.948572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:19.910 [2024-12-09 12:00:27.948581] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:19.910 [2024-12-09 12:00:27.948587] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:19.910 [2024-12-09 12:00:27.958770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:19.910 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:27.968519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:27.968563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:27.968578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:27.968589] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:27.968595] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:27.978787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:27.988536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:27.988578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:27.988592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:27.988600] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:27.988606] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:27.998959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.008748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.008789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.008803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.008815] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.008822] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.019076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.028751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.028791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.028806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.028819] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.028826] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.039026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.048866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.048904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.048919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.048926] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.048937] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.059285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.068819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.068862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.068877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.068885] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.068891] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.079133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.088914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.088959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.088974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.088981] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.088988] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.099399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.108986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.109027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.109042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.109049] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.109056] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.119395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.128991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.129030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.129046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.129053] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.129059] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.139427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.149117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.149161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.149176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.149184] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.149190] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.159483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.169206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.170 [2024-12-09 12:00:28.169254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.170 [2024-12-09 12:00:28.169269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.170 [2024-12-09 12:00:28.169276] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.170 [2024-12-09 12:00:28.169283] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.170 [2024-12-09 12:00:28.179517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.170 qpair failed and we were unable to recover it.
00:21:20.170 [2024-12-09 12:00:28.189247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.171 [2024-12-09 12:00:28.189285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.171 [2024-12-09 12:00:28.189300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.171 [2024-12-09 12:00:28.189308] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.171 [2024-12-09 12:00:28.189314] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.171 [2024-12-09 12:00:28.199646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.171 qpair failed and we were unable to recover it.
00:21:20.171 [2024-12-09 12:00:28.209255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.171 [2024-12-09 12:00:28.209300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.171 [2024-12-09 12:00:28.209315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.171 [2024-12-09 12:00:28.209322] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.171 [2024-12-09 12:00:28.209328] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.171 [2024-12-09 12:00:28.219598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.171 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.229312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.430 [2024-12-09 12:00:28.229354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.430 [2024-12-09 12:00:28.229372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.430 [2024-12-09 12:00:28.229379] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.430 [2024-12-09 12:00:28.229386] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.430 [2024-12-09 12:00:28.239817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.430 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.249414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.430 [2024-12-09 12:00:28.249457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.430 [2024-12-09 12:00:28.249472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.430 [2024-12-09 12:00:28.249479] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.430 [2024-12-09 12:00:28.249485] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.430 [2024-12-09 12:00:28.259698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.430 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.269443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.430 [2024-12-09 12:00:28.269485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.430 [2024-12-09 12:00:28.269501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.430 [2024-12-09 12:00:28.269508] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.430 [2024-12-09 12:00:28.269515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.430 [2024-12-09 12:00:28.279874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.430 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.289488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.430 [2024-12-09 12:00:28.289526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.430 [2024-12-09 12:00:28.289541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.430 [2024-12-09 12:00:28.289548] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.430 [2024-12-09 12:00:28.289555] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.430 [2024-12-09 12:00:28.299797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.430 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.309599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:20.430 [2024-12-09 12:00:28.309639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:20.430 [2024-12-09 12:00:28.309654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:20.430 [2024-12-09 12:00:28.309664] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:20.430 [2024-12-09 12:00:28.309671] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:21:20.430 [2024-12-09 12:00:28.319952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:20.430 qpair failed and we were unable to recover it.
00:21:20.430 [2024-12-09 12:00:28.329661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.430 [2024-12-09 12:00:28.329706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.430 [2024-12-09 12:00:28.329721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.430 [2024-12-09 12:00:28.329728] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.430 [2024-12-09 12:00:28.329734] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.340065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.431 [2024-12-09 12:00:28.349656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.349696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.349712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.349718] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.349725] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.360066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.431 [2024-12-09 12:00:28.369703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.369746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.369761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.369768] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.369776] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.380038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 
00:21:20.431 [2024-12-09 12:00:28.389707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.389749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.389764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.389771] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.389777] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.400242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.431 [2024-12-09 12:00:28.409916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.409957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.409972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.409979] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.409986] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.420195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.431 [2024-12-09 12:00:28.429924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.429968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.429983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.429991] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.429998] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.440198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 
00:21:20.431 [2024-12-09 12:00:28.449909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.449945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.449960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.449967] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.449974] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.460314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.431 [2024-12-09 12:00:28.470139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.431 [2024-12-09 12:00:28.470180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.431 [2024-12-09 12:00:28.470195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.431 [2024-12-09 12:00:28.470203] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.431 [2024-12-09 12:00:28.470209] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.431 [2024-12-09 12:00:28.480585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.431 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.490119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.490164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.490179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.490187] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.490193] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.500312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 
00:21:20.691 [2024-12-09 12:00:28.510150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.510194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.510210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.510217] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.510223] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.520857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.530231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.530268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.530283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.530290] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.530297] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.540563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.550346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.550386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.550401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.550409] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.550415] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.560708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 
00:21:20.691 [2024-12-09 12:00:28.570322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.570366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.570385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.570392] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.570398] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.580744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.590465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.590508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.590522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.590530] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.590537] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.600748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.610474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.610510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.610525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.610532] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.610538] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.620823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 
00:21:20.691 [2024-12-09 12:00:28.630534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.630577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.630591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.630598] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.630605] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.640911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.650572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.650613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.650628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.650638] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.650645] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.660874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.670583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.670621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.670636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.670643] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.670649] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.681008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 
00:21:20.691 [2024-12-09 12:00:28.690708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.690751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.690765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.690773] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.690780] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.700968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.710734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.710775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.710790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.710797] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.691 [2024-12-09 12:00:28.710804] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.691 [2024-12-09 12:00:28.721243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.691 qpair failed and we were unable to recover it. 00:21:20.691 [2024-12-09 12:00:28.730860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.691 [2024-12-09 12:00:28.730900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.691 [2024-12-09 12:00:28.730914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.691 [2024-12-09 12:00:28.730922] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.692 [2024-12-09 12:00:28.730928] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.692 [2024-12-09 12:00:28.741179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.692 qpair failed and we were unable to recover it. 
00:21:20.951 [2024-12-09 12:00:28.750895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.750934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.750949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.750956] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.750963] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.761330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.951 [2024-12-09 12:00:28.771011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.771053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.771068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.771075] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.771082] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.781294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.951 [2024-12-09 12:00:28.791012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.791054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.791070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.791076] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.791083] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.801402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 
00:21:20.951 [2024-12-09 12:00:28.811136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.811176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.811191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.811198] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.811205] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.821435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.951 [2024-12-09 12:00:28.831049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.831085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.831100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.831107] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.831113] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.841561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.951 [2024-12-09 12:00:28.851124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.851163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.851178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.851185] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.851192] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.861560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 
00:21:20.951 [2024-12-09 12:00:28.871270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.871312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.871327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.871334] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.871341] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.881572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.951 [2024-12-09 12:00:28.891286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.951 [2024-12-09 12:00:28.891330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.951 [2024-12-09 12:00:28.891345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.951 [2024-12-09 12:00:28.891352] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.951 [2024-12-09 12:00:28.891358] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.951 [2024-12-09 12:00:28.901706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.951 qpair failed and we were unable to recover it. 00:21:20.952 [2024-12-09 12:00:28.911453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.952 [2024-12-09 12:00:28.911495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.952 [2024-12-09 12:00:28.911514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.952 [2024-12-09 12:00:28.911521] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.952 [2024-12-09 12:00:28.911528] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.952 [2024-12-09 12:00:28.921771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.952 qpair failed and we were unable to recover it. 
00:21:20.952 [2024-12-09 12:00:28.931423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.952 [2024-12-09 12:00:28.931462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.952 [2024-12-09 12:00:28.931478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.952 [2024-12-09 12:00:28.931485] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.952 [2024-12-09 12:00:28.931492] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.952 [2024-12-09 12:00:28.941762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.952 qpair failed and we were unable to recover it. 00:21:20.952 [2024-12-09 12:00:28.951576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.952 [2024-12-09 12:00:28.951618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.952 [2024-12-09 12:00:28.951633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.952 [2024-12-09 12:00:28.951639] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.952 [2024-12-09 12:00:28.951646] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.952 [2024-12-09 12:00:28.961903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.952 qpair failed and we were unable to recover it. 00:21:20.952 [2024-12-09 12:00:28.971525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.952 [2024-12-09 12:00:28.971572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.952 [2024-12-09 12:00:28.971586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.952 [2024-12-09 12:00:28.971594] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.952 [2024-12-09 12:00:28.971600] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.952 [2024-12-09 12:00:28.981857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.952 qpair failed and we were unable to recover it. 
00:21:20.952 [2024-12-09 12:00:28.991568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:20.952 [2024-12-09 12:00:28.991606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:20.952 [2024-12-09 12:00:28.991621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:20.952 [2024-12-09 12:00:28.991629] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:20.952 [2024-12-09 12:00:28.991638] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:20.952 [2024-12-09 12:00:29.001719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:20.952 qpair failed and we were unable to recover it. 00:21:21.210 [2024-12-09 12:00:29.011610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:21.210 [2024-12-09 12:00:29.011656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:21.210 [2024-12-09 12:00:29.011672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:21.210 [2024-12-09 12:00:29.011680] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:21.210 [2024-12-09 12:00:29.011686] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:21.210 [2024-12-09 12:00:29.021872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:21.210 qpair failed and we were unable to recover it. 
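Runs like this are easier to triage by counting the repeated markers than by reading them linearly; a minimal sketch, assuming the console output has been saved to a file (the console.log name is a placeholder):

    grep -c 'qpair failed and we were unable to recover it' console.log   # retry cycles that gave up
    grep -c 'Controller properly reset' console.log                       # recoveries that completed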
00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Read completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 Write completed with error (sct=0, sc=8) 00:21:22.145 starting I/O failed 00:21:22.145 [2024-12-09 12:00:30.026869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.145 [2024-12-09 12:00:30.034486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:22.145 [2024-12-09 12:00:30.034534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:22.145 [2024-12-09 12:00:30.034552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:22.145 [2024-12-09 12:00:30.034560] 
nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:22.145 [2024-12-09 12:00:30.034571] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:21:22.145 [2024-12-09 12:00:30.045079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.145 qpair failed and we were unable to recover it. 00:21:22.145 [2024-12-09 12:00:30.054716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:22.145 [2024-12-09 12:00:30.054766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:22.145 [2024-12-09 12:00:30.054783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:22.145 [2024-12-09 12:00:30.054790] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:22.145 [2024-12-09 12:00:30.054796] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:21:22.145 [2024-12-09 12:00:30.065116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.145 qpair failed and we were unable to recover it. 00:21:22.145 [2024-12-09 12:00:30.065264] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:21:22.145 A controller has encountered a failure and is being reset. 00:21:22.145 [2024-12-09 12:00:30.065388] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:21:22.145 [2024-12-09 12:00:30.067441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:21:22.145 Controller properly reset. 
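At this point the tc2 recovery path has run end to end: the outstanding I/O completes with errors, the Keep Alive submission fails, and the controller is reset and comes back. From the host side the subsystem can then be re-probed to confirm it is reachable again, for example with nvme-cli (illustrative; assumes nvme-cli and the RDMA transport modules are loaded on the test host):

    nvme discover -t rdma -a 192.168.100.8 -s 4420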
00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Write completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.082 starting I/O failed 00:21:23.082 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Write completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 Read completed with error (sct=0, sc=8) 00:21:23.083 starting I/O failed 00:21:23.083 [2024-12-09 12:00:31.090737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:23.083 Initializing NVMe Controllers 00:21:23.083 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.083 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:23.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:23.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:23.083 Associating RDMA 
(addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:23.083 Initialization complete. Launching workers. 00:21:23.083 Starting thread on core 1 00:21:23.083 Starting thread on core 2 00:21:23.083 Starting thread on core 3 00:21:23.083 Starting thread on core 0 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:21:23.342 00:21:23.342 real 0m12.910s 00:21:23.342 user 0m25.035s 00:21:23.342 sys 0m2.502s 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.342 ************************************ 00:21:23.342 END TEST nvmf_target_disconnect_tc2 00:21:23.342 ************************************ 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:23.342 ************************************ 00:21:23.342 START TEST nvmf_target_disconnect_tc3 00:21:23.342 ************************************ 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3323373 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:21:23.342 12:00:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:21:25.242 12:00:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3321952 00:21:25.242 12:00:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 
starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Read completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 Write completed with error (sct=0, sc=8) 00:21:26.791 starting I/O failed 00:21:26.791 [2024-12-09 12:00:34.404888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:26.791 [2024-12-09 12:00:34.406611] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:26.791 [2024-12-09 12:00:34.406633] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:26.791 [2024-12-09 12:00:34.406647] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:27.410 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3321952 Killed "${NVMF_APP[@]}" "$@" 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- 
# set +x 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3324024 00:21:27.410 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3324024 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3324024 ']' 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.411 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.411 [2024-12-09 12:00:35.274905] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:21:27.411 [2024-12-09 12:00:35.274955] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.411 [2024-12-09 12:00:35.352852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.411 [2024-12-09 12:00:35.392664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.411 [2024-12-09 12:00:35.392704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.411 [2024-12-09 12:00:35.392712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.411 [2024-12-09 12:00:35.392718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.411 [2024-12-09 12:00:35.392723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.411 [2024-12-09 12:00:35.394213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:27.411 [2024-12-09 12:00:35.394323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:27.411 [2024-12-09 12:00:35.394428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.411 [2024-12-09 12:00:35.394429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:21:27.411 [2024-12-09 12:00:35.410652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:27.411 qpair failed and we were unable to recover it. 
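The replacement target is started with -m 0xF0, and the four reactor lines above come up on cores 4 through 7 because bit n of the SPDK core mask selects core n. A quick decode (illustrative):

    mask=0xF0
    for i in $(seq 0 7); do
      (( (mask >> i) & 1 )) && echo "core $i"
    done   # prints core 4, core 5, core 6, core 7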
00:21:27.411 [2024-12-09 12:00:35.412286] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:27.411 [2024-12-09 12:00:35.412307] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:27.411 [2024-12-09 12:00:35.412314] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.670 Malloc0 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.670 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.670 [2024-12-09 12:00:35.600984] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d0e9f0/0x1d1a6d0) succeed. 00:21:27.670 [2024-12-09 12:00:35.612708] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d10080/0x1d5bd70) succeed. 
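The two create_ib_device notices show the target binding both mlx5 ports. When a phy run dies earlier than this point, it is worth confirming the RDMA devices are visible at all; with the rdma-core utilities (assumed installed on the test host):

    ibv_devices            # should list mlx5_0 and mlx5_1
    ibv_devinfo -d mlx5_0  # port state should be PORT_ACTIVE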
00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.946 [2024-12-09 12:00:35.760428] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 12:00:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3323373 00:21:28.558 [2024-12-09 12:00:36.416476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.558 qpair failed and we were unable to recover it. 
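rpc_cmd in these traces is a thin wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock, so the tc3 target bring-up reconstructed from the calls above is roughly the following sequence; note the listeners are added on 192.168.100.9, the failover address the host will eventually resort to:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420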
00:21:28.558 [2024-12-09 12:00:36.418192] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:28.558 [2024-12-09 12:00:36.418211] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:28.558 [2024-12-09 12:00:36.418218] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:29.546 [2024-12-09 12:00:37.422083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.546 qpair failed and we were unable to recover it. 00:21:29.546 [2024-12-09 12:00:37.423598] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:29.546 [2024-12-09 12:00:37.423616] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:29.546 [2024-12-09 12:00:37.423622] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:30.531 [2024-12-09 12:00:38.427484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.531 qpair failed and we were unable to recover it. 00:21:30.531 [2024-12-09 12:00:38.428898] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:30.531 [2024-12-09 12:00:38.428918] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:30.531 [2024-12-09 12:00:38.428924] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:31.514 [2024-12-09 12:00:39.432805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.514 qpair failed and we were unable to recover it. 00:21:31.514 [2024-12-09 12:00:39.434273] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:31.514 [2024-12-09 12:00:39.434290] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:31.514 [2024-12-09 12:00:39.434296] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:32.523 [2024-12-09 12:00:40.438254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:32.523 qpair failed and we were unable to recover it. 
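The host process (the wait on pid 3323373) now retries the lost connection roughly once per second, and every attempt fails the same way: the peer answers the connect request with RDMA_CM_EVENT_REJECTED (CM event 8), which nvme_rdma surfaces as connect error -74, a negative Linux errno. Decoding it (assuming Linux errno numbering):

    # -74 -> EBADMSG ("Bad message") on Linux
    python3 -c 'import errno, os; print(errno.errorcode[74], "=", os.strerror(74))'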
00:21:32.523 [2024-12-09 12:00:40.439650] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:32.523 [2024-12-09 12:00:40.439670] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:32.523 [2024-12-09 12:00:40.439677] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:33.497 [2024-12-09 12:00:41.443525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:33.497 qpair failed and we were unable to recover it. 00:21:33.497 [2024-12-09 12:00:41.445184] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:33.497 [2024-12-09 12:00:41.445202] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:33.497 [2024-12-09 12:00:41.445208] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:21:34.487 [2024-12-09 12:00:42.449139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:21:34.487 qpair failed and we were unable to recover it. 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 
00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Write completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 Read completed with error (sct=0, sc=8) 00:21:35.473 starting I/O failed 00:21:35.473 [2024-12-09 12:00:43.454208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Read completed with error (sct=0, sc=8) 00:21:36.503 starting I/O failed 00:21:36.503 Write completed with error (sct=0, sc=8) 00:21:36.504 starting I/O failed 00:21:36.504 Write completed with error (sct=0, sc=8) 00:21:36.504 starting I/O failed 00:21:36.504 Read completed with error (sct=0, sc=8) 00:21:36.504 starting I/O failed 00:21:36.504 Read completed with error (sct=0, sc=8) 
00:21:36.504 starting I/O failed 00:21:36.504 Write completed with error (sct=0, sc=8) 00:21:36.504 starting I/O failed 00:21:36.504 [2024-12-09 12:00:44.459114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:21:36.504 [2024-12-09 12:00:44.460605] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:36.504 [2024-12-09 12:00:44.460624] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:36.504 [2024-12-09 12:00:44.460631] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:21:37.439 [2024-12-09 12:00:45.464438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.439 qpair failed and we were unable to recover it. 00:21:37.439 [2024-12-09 12:00:45.465984] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.439 [2024-12-09 12:00:45.466001] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.440 [2024-12-09 12:00:45.466008] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:21:38.815 [2024-12-09 12:00:46.469698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:21:38.815 qpair failed and we were unable to recover it. 
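The bursts of "Read/Write completed with error (sct=0, sc=8)" in this stretch are the in-flight commands of a queue pair being completed back to the caller as the qpair is torn down: status code type 0 is the generic command status set, and status 0x8 there is, per the NVMe base specification, "Command Aborted due to SQ Deletion", consistent with 32 queued commands being aborted each time a qpair dies. To count them from a saved console log (file name hypothetical):

    # one burst per deleted submission queue, 32 aborted commands per burst here
    grep -c 'completed with error (sct=0, sc=8)' console.log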
00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Read completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 Write completed with error (sct=0, sc=8) 00:21:39.747 starting I/O failed 00:21:39.747 [2024-12-09 12:00:47.474790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.747 [2024-12-09 12:00:47.476409] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:39.747 [2024-12-09 12:00:47.476427] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:39.747 [2024-12-09 12:00:47.476433] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:21:40.679 [2024-12-09 12:00:48.480389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.679 qpair failed and we were unable to recover it. 00:21:40.679 [2024-12-09 12:00:48.481812] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:40.679 [2024-12-09 12:00:48.481830] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:40.679 [2024-12-09 12:00:48.481836] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:21:41.613 [2024-12-09 12:00:49.485806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.614 qpair failed and we were unable to recover it. 00:21:41.614 [2024-12-09 12:00:49.485937] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:21:41.614 A controller has encountered a failure and is being reset. 00:21:41.614 Resorting to new failover address 192.168.100.9 00:21:41.614 [2024-12-09 12:00:49.488058] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:41.614 [2024-12-09 12:00:49.488111] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:41.614 [2024-12-09 12:00:49.488134] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40 00:21:42.548 [2024-12-09 12:00:50.491982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.548 qpair failed and we were unable to recover it. 00:21:42.548 [2024-12-09 12:00:50.493541] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.548 [2024-12-09 12:00:50.493559] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.548 [2024-12-09 12:00:50.493565] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40 00:21:43.482 [2024-12-09 12:00:51.497523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:21:43.482 qpair failed and we were unable to recover it. 00:21:43.482 [2024-12-09 12:00:51.497629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:43.482 [2024-12-09 12:00:51.497737] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:21:43.482 [2024-12-09 12:00:51.529468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:43.742 Controller properly reset. 
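This is the tc3 failover path completing: once the keep-alive to the dead controller finally fails, the host marks the controller as failed, resorts to the alternate address 192.168.100.9 registered above, and the reset eventually succeeds ("Controller properly reset"). The SPDK initiator in this test drives the retries internally; for comparison only, a kernel initiator would express a similar policy through the reconnect options of nvme connect (an analogue, not what this test runs):

    # kernel-initiator analogue: retry every 10 s, give up after 600 s
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --reconnect-delay=10 --ctrl-loss-tmo=600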
00:21:43.742 Initializing NVMe Controllers 00:21:43.742 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.742 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.742 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:43.742 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:43.742 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:43.742 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:43.742 Initialization complete. Launching workers. 00:21:43.742 Starting thread on core 1 00:21:43.742 Starting thread on core 2 00:21:43.742 Starting thread on core 3 00:21:43.742 Starting thread on core 0 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:21:43.742 00:21:43.742 real 0m20.385s 00:21:43.742 user 1m6.857s 00:21:43.742 sys 0m4.834s 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.742 ************************************ 00:21:43.742 END TEST nvmf_target_disconnect_tc3 00:21:43.742 ************************************ 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:43.742 rmmod nvme_rdma 00:21:43.742 rmmod nvme_fabrics 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3324024 ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3324024 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3324024 ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3324024 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3324024 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3324024' 00:21:43.742 killing process with pid 3324024 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3324024 00:21:43.742 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3324024 00:21:44.001 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.001 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:44.001 00:21:44.001 real 0m41.097s 00:21:44.001 user 2m40.613s 00:21:44.001 sys 0m12.361s 00:21:44.001 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.001 12:00:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:44.001 ************************************ 00:21:44.001 END TEST nvmf_target_disconnect 00:21:44.001 ************************************ 00:21:44.001 12:00:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:44.001 00:21:44.001 real 5m11.278s 00:21:44.001 user 12m49.408s 00:21:44.001 sys 1m25.305s 00:21:44.001 12:00:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.001 12:00:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.001 ************************************ 00:21:44.001 END TEST nvmf_host 00:21:44.001 ************************************ 00:21:44.260 12:00:52 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:21:44.260 00:21:44.260 real 16m31.475s 00:21:44.260 user 41m6.265s 00:21:44.260 sys 4m42.157s 00:21:44.260 12:00:52 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.260 12:00:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:44.260 ************************************ 00:21:44.260 END TEST nvmf_rdma 00:21:44.260 ************************************ 00:21:44.260 12:00:52 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:44.260 12:00:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.260 12:00:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.260 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:21:44.260 ************************************ 00:21:44.260 START TEST spdkcli_nvmf_rdma 00:21:44.260 ************************************ 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:44.260 * Looking for test storage... 
00:21:44.260 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.260 --rc genhtml_branch_coverage=1 00:21:44.260 --rc genhtml_function_coverage=1 00:21:44.260 --rc genhtml_legend=1 00:21:44.260 --rc geninfo_all_blocks=1 00:21:44.260 --rc geninfo_unexecuted_blocks=1 00:21:44.260 00:21:44.260 ' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:44.260 --rc genhtml_branch_coverage=1 00:21:44.260 --rc genhtml_function_coverage=1 00:21:44.260 --rc genhtml_legend=1 00:21:44.260 --rc geninfo_all_blocks=1 00:21:44.260 --rc geninfo_unexecuted_blocks=1 00:21:44.260 00:21:44.260 ' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.260 --rc genhtml_branch_coverage=1 00:21:44.260 --rc genhtml_function_coverage=1 00:21:44.260 --rc genhtml_legend=1 00:21:44.260 --rc geninfo_all_blocks=1 00:21:44.260 --rc geninfo_unexecuted_blocks=1 00:21:44.260 00:21:44.260 ' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.260 --rc genhtml_branch_coverage=1 00:21:44.260 --rc genhtml_function_coverage=1 00:21:44.260 --rc genhtml_legend=1 00:21:44.260 --rc geninfo_all_blocks=1 00:21:44.260 --rc geninfo_unexecuted_blocks=1 00:21:44.260 00:21:44.260 ' 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.260 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3326983 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3326983 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3326983 ']' 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.520 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:44.520 [2024-12-09 12:00:52.384178] Starting SPDK v25.01-pre git sha1 3fe025922 / DPDK 24.03.0 initialization... 00:21:44.520 [2024-12-09 12:00:52.384225] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326983 ] 00:21:44.520 [2024-12-09 12:00:52.459487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.520 [2024-12-09 12:00:52.502275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.520 [2024-12-09 12:00:52.502277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.779 12:00:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
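gather_supported_nvmf_pci_devs builds tables of supported NIC PCI IDs (Intel E810/X722 plus the Mellanox ConnectX family) and matches them against the bus; the two hits just below, 0x15b3:0x1015, are ConnectX-4 Lx ports. The same match can be made directly with lspci:

    # vendor 0x15b3 (Mellanox), device 0x1015, as matched by the harness below
    lspci -d 15b3:1015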
00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:50.046 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:50.046 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.046 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:50.047 Found net devices under 0000:da:00.0: mlx_0_0 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:50.047 Found net devices under 0000:da:00.1: mlx_0_1 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]]
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:21:50.047 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:21:50.306 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:50.306 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:21:50.306 altname enp218s0f0np0
00:21:50.306 altname ens818f0np0
00:21:50.306 inet 192.168.100.8/24 scope global mlx_0_0
00:21:50.306 valid_lft forever preferred_lft forever
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:21:50.306 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:50.306 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:21:50.306 altname enp218s0f1np1
00:21:50.306 altname ens818f1np1
00:21:50.306 inet 192.168.100.9/24 scope global mlx_0_1
00:21:50.306 valid_lft forever preferred_lft forever
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:21:50.306 192.168.100.9'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:21:50.306 192.168.100.9'
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:21:50.306 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:21:50.306 192.168.100.9'
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:21:50.307 12:00:58 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:21:50.307 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:21:50.307 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:21:50.307 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:21:50.307 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:21:50.307 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:21:50.307 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:21:50.307 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:21:50.307 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:21:50.307 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:21:50.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:21:50.307 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:21:50.307 '
00:21:53.592 [2024-12-09 12:01:00.967423] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9030f0/0x7efd00) succeed.
00:21:53.592 [2024-12-09 12:01:00.977817] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9047d0/0x86fd40) succeed.
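[Editor's note] The address-discovery trace above reduces to a small shell pattern. The sketch below is a paraphrase of what the logged nvmf/common.sh steps do, assuming the same two RoCE interfaces (mlx_0_0, mlx_0_1) and that their 192.168.100.0/24 addresses are already configured; the real allocate_nic_ips helper can also assign addresses (note the NVMF_IP_LEAST_ADDR counter in the trace), which this sketch omits.

get_ip_address() {
    local interface=$1
    # Print the IPv4 address of $interface, stripping the /prefix length,
    # exactly as the logged "ip -o -4 | awk | cut" pipeline does.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Collect one IP per RDMA-capable NIC, then split first/second the way
# the trace derives NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.
rdma_ip_list=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)

The head/tail split is why the trace echoes the two-line IP list twice: once to take line 1, once to drop line 1 and take what remains.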
00:21:54.528 [2024-12-09 12:01:02.372602] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:21:57.061 [2024-12-09 12:01:04.864722] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:21:59.591 [2024-12-09 12:01:07.019865] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:22:00.968 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:22:00.968 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:22:00.968 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:00.968 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:00.968 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:22:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:22:00.968 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:22:00.968 12:01:08 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:22:01.227 12:01:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:22:01.227 12:01:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:22:01.227 12:01:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:22:01.227 12:01:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:01.227 12:01:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:01.485 12:01:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:22:01.485 12:01:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:01.485 12:01:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:01.485 12:01:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:22:01.485 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:22:01.486 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:22:01.486 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:22:01.486 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:22:01.486 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:22:01.486 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:22:01.486 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:22:01.486 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' '
00:22:06.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:22:06.755 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:22:06.755 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:22:06.755 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:22:06.755 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:22:06.755 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:22:06.755 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:22:06.755 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:22:06.755 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3326983
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3326983 ']'
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3326983
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3326983
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3326983'
00:22:07.014 killing process with pid 3326983
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3326983
00:22:07.014 12:01:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3326983
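[Editor's note] The killprocess trace just logged follows a common guard-then-kill shell idiom: bail out if no PID was recorded, probe liveness with kill -0, refuse to signal a sudo wrapper, then kill and reap. The sketch below is a paraphrase of that idiom with the PID (3326983 above) passed as an argument; the real common/autotest_common.sh helper may differ in detail.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no PID recorded for this test
    kill -0 "$pid" 2>/dev/null || return 0   # signal 0 only probes; gone already
    # On Linux, check the command name so we never signal a sudo wrapper.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                  # reap the exit status if it is our child
}

In the run above the probed name is reactor_0 (an SPDK reactor thread), so the guard passes and the target is killed and waited on.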
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:22:07.273 rmmod nvme_rdma
00:22:07.273 rmmod nvme_fabrics
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:22:07.273
00:22:07.273 real 0m23.108s
00:22:07.273 user 0m51.181s
00:22:07.273 sys 0m4.994s
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:07.273 12:01:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:07.273 ************************************
00:22:07.273 END TEST spdkcli_nvmf_rdma
00:22:07.273 ************************************
00:22:07.273 12:01:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:22:07.273 12:01:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:22:07.273 12:01:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:22:07.273 12:01:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:22:07.273 12:01:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:22:07.273 12:01:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:22:07.273 12:01:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:22:07.273 12:01:15 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:07.273 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:22:07.273 12:01:15 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:22:07.273 12:01:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:22:07.273 12:01:15 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:22:07.273 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:22:12.546 INFO: APP EXITING
00:22:12.546 INFO: killing all VMs
00:22:12.546 INFO: killing vhost app
00:22:12.546 INFO: EXIT DONE
00:22:15.081 Waiting for block devices as requested
00:22:15.081 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:22:15.081 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:15.081 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:15.081 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:15.081 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:15.340 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:15.340 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:15.340 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:15.599 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:15.599 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:15.599 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:15.599 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:15.858 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:15.858 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:15.858 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:16.117 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:16.117 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:19.406 Cleaning
00:22:19.406 Removing: /var/run/dpdk/spdk0/config
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:22:19.406 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:22:19.406 Removing: /var/run/dpdk/spdk0/hugepage_info
00:22:19.406 Removing: /var/run/dpdk/spdk1/config
00:22:19.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:22:19.407 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:22:19.407 Removing: /var/run/dpdk/spdk1/hugepage_info
00:22:19.407 Removing: /var/run/dpdk/spdk1/mp_socket
00:22:19.407 Removing: /var/run/dpdk/spdk2/config
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:22:19.407 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:22:19.407 Removing: /var/run/dpdk/spdk2/hugepage_info
00:22:19.407 Removing: /var/run/dpdk/spdk3/config
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:22:19.407 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:22:19.407 Removing: /var/run/dpdk/spdk3/hugepage_info
00:22:19.407 Removing: /var/run/dpdk/spdk4/config
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:22:19.407 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:22:19.407 Removing: /var/run/dpdk/spdk4/hugepage_info
00:22:19.407 Removing: /dev/shm/bdevperf_trace.pid3085750
00:22:19.407 Removing: /dev/shm/bdev_svc_trace.1
00:22:19.407 Removing: /dev/shm/nvmf_trace.0
00:22:19.407 Removing: /dev/shm/spdk_tgt_trace.pid3043724
00:22:19.407 Removing: /var/run/dpdk/spdk0
00:22:19.407 Removing: /var/run/dpdk/spdk1
00:22:19.407 Removing: /var/run/dpdk/spdk2
00:22:19.407 Removing: /var/run/dpdk/spdk3
00:22:19.407 Removing: /var/run/dpdk/spdk4
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3041365
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3042426
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3043724
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3044261
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3045188
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3045332
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3046305
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3046416
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3046671
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3051423
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3052922
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3053213
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3053514
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3053812
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3054104
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3054362
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3054615
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3054896
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3055640
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3058642
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3058899
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3059155
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3059158
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3059652
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3059663
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060151
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060160
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060429
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060646
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060766
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3060909
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3061353
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3061530
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3061868
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3065753
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3069773
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3080316
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3081226
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3085750
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3086060
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3090011
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3095670
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3098379
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3107915
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3131766
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3135376
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3175972
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3180925
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3186955
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3195408
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3234160
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3235003
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3236084
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3237161
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3241510
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3247232
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3253876
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3254787
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3255698
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3256617
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3257072
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3261300
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3261305
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3265570
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3266167
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3266715
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3267403
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3267432
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3272624
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3273189
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3277208
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3279838
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3285192
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3294974
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3295000
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3314408
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3314644
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3320975
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3321261
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3323373
00:22:19.407 Removing: /var/run/dpdk/spdk_pid3326983
00:22:19.407 Clean
00:22:19.666 12:01:27 -- common/autotest_common.sh@1453 -- # return 0
00:22:19.666 12:01:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:22:19.666 12:01:27 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:19.666 12:01:27 -- common/autotest_common.sh@10 -- # set +x
00:22:19.666 12:01:27 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:22:19.666 12:01:27 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:19.666 12:01:27 -- common/autotest_common.sh@10 -- # set +x
00:22:19.666 12:01:27 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:22:19.666 12:01:27 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:22:19.666 12:01:27 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:22:19.666 12:01:27 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:22:19.666 12:01:27 -- spdk/autotest.sh@398 -- # hostname
00:22:19.666 12:01:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:22:19.924 geninfo: WARNING: invalid characters removed from testname!
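[Editor's note] The coverage post-processing around this point (the capture just above, the merge and filter passes just below) condenses to the short lcov workflow sketched here. OUT is a stand-in for the job's spdk/../output directory, and the real invocations also pass the long list of --rc options and further removal patterns; this is a compressed paraphrase, not the autotest script itself.

# Capture coverage for the tree, merge it with the pre-test baseline,
# then prune third-party and uninteresting paths from the combined report.
OUT=/path/to/output   # hypothetical stand-in for spdk/../output
lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

Filtering in place with repeated -r passes, as the job does below, keeps only SPDK's own sources in the final cov_total.info.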
00:22:41.862 12:01:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:41.862 12:01:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:43.238 12:01:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:45.141 12:01:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:47.043 12:01:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:48.947 12:01:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:50.324 12:01:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:50.324 12:01:58 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:50.324 12:01:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:22:50.324 12:01:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:50.324 12:01:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:50.324 12:01:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:22:50.324 + [[ -n 2964855 ]]
00:22:50.324 + sudo kill 2964855
00:22:50.334 [Pipeline] }
00:22:50.349 [Pipeline] // stage
00:22:50.354 [Pipeline] }
00:22:50.368 [Pipeline] // timeout
00:22:50.373 [Pipeline] }
00:22:50.386 [Pipeline] // catchError
00:22:50.391 [Pipeline] }
00:22:50.406 [Pipeline] // wrap
00:22:50.412 [Pipeline] }
00:22:50.425 [Pipeline] // catchError
00:22:50.434 [Pipeline] stage
00:22:50.437 [Pipeline] { (Epilogue)
00:22:50.449 [Pipeline] catchError
00:22:50.451 [Pipeline] {
00:22:50.464 [Pipeline] echo
00:22:50.466 Cleanup processes
00:22:50.471 [Pipeline] sh
00:22:50.759 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:50.759 3341086 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:50.772 [Pipeline] sh
00:22:51.057 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:51.057 ++ grep -v 'sudo pgrep'
00:22:51.057 ++ awk '{print $1}'
00:22:51.057 + sudo kill -9
00:22:51.069 + true
00:22:51.069 [Pipeline] sh
00:22:51.353 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:59.480 [Pipeline] sh
00:22:59.764 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:59.764 Artifacts sizes are good
00:22:59.778 [Pipeline] archiveArtifacts
00:22:59.785 Archiving artifacts
00:22:59.909 [Pipeline] sh
00:23:00.193 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:23:00.209 [Pipeline] cleanWs
00:23:00.218 [WS-CLEANUP] Deleting project workspace...
00:23:00.218 [WS-CLEANUP] Deferred wipeout is used...
00:23:00.224 [WS-CLEANUP] done
00:23:00.226 [Pipeline] }
00:23:00.244 [Pipeline] // catchError
00:23:00.255 [Pipeline] sh
00:23:00.538 + logger -p user.info -t JENKINS-CI
00:23:00.546 [Pipeline] }
00:23:00.559 [Pipeline] // stage
00:23:00.564 [Pipeline] }
00:23:00.579 [Pipeline] // node
00:23:00.584 [Pipeline] End of Pipeline
00:23:00.617 Finished: SUCCESS